Jan 06 13:59:40 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 06 13:59:40 crc restorecon[4747]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 06 13:59:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 06 13:59:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 06 13:59:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 06 13:59:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 06 13:59:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:40 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 
13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 06 13:59:41 crc 
restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 
13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 
13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc 
restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 06 13:59:41 crc restorecon[4747]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 06 13:59:41 crc restorecon[4747]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 06 13:59:41 crc kubenswrapper[4869]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 06 13:59:41 crc kubenswrapper[4869]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 06 13:59:41 crc kubenswrapper[4869]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 06 13:59:41 crc kubenswrapper[4869]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 06 13:59:41 crc kubenswrapper[4869]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 06 13:59:41 crc kubenswrapper[4869]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.555980 4869 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561162 4869 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561192 4869 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561199 4869 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561205 4869 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561211 4869 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561215 4869 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561219 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561225 4869 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561230 4869 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561234 4869 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561238 4869 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561243 4869 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561249 4869 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
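[Editor's sketch. The deprecation warnings in this stretch of the log all point at the same remedy: set these options in the file passed to the kubelet via --config rather than on the command line (the kubelet-config-file doc linked in the messages describes the mechanism). As a minimal, hedged sketch only — assuming the upstream KubeletConfiguration v1beta1 schema, with illustrative values that are not taken from this node — the flags named above would map to fields like this:]

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # replaces --container-runtime-endpoint; socket path is an assumed CRI-O default
  containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
  # replaces --volume-plugin-dir; directory is illustrative
  volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
  # replaces --register-with-taints; taint is illustrative
  registerWithTaints:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
  # replaces --system-reserved; reservations are illustrative
  systemReserved:
    cpu: 500m
    memory: 1Gi
  # --minimum-container-ttl-duration is deprecated in favor of eviction settings
  evictionHard:
    memory.available: 100Mi
  # --pod-infra-container-image has no config-file equivalent; per the warning
  # above, the sandbox image is obtained from the CRI runtime instead.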
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561254 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561259 4869 feature_gate.go:330] unrecognized feature gate: Example
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561263 4869 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561267 4869 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561271 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561276 4869 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561280 4869 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561284 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561288 4869 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561292 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561296 4869 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561300 4869 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561305 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561309 4869 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561322 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561326 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561330 4869 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561334 4869 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561337 4869 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561341 4869 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561345 4869 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561349 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561353 4869 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561356 4869 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561360 4869 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561364 4869 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561367 4869 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561371 4869 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561374 4869 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561378 4869 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561381 4869 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561384 4869 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561391 4869 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561395 4869 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561399 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561402 4869 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561405 4869 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561410 4869 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561415 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561419 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561423 4869 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561427 4869 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561431 4869 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561434 4869 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561438 4869 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561441 4869 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561445 4869 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561448 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561452 4869 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561455 4869 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561458 4869 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561462 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561466 4869 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561470 4869 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561473 4869 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561477 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561480 4869 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.561484 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561737 4869 flags.go:64] FLAG: --address="0.0.0.0"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561749 4869 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561758 4869 flags.go:64] FLAG: --anonymous-auth="true"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561763 4869 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561769 4869 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561793 4869 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561800 4869 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561807 4869 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561812 4869 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561817 4869 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561822 4869 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561828 4869 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561833 4869 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561837 4869 flags.go:64] FLAG: --cgroup-root=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561843 4869 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561848 4869 flags.go:64] FLAG: --client-ca-file=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561853 4869 flags.go:64] FLAG: --cloud-config=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561857 4869 flags.go:64] FLAG: --cloud-provider=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561861 4869 flags.go:64] FLAG: --cluster-dns="[]"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561867 4869 flags.go:64] FLAG: --cluster-domain=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561872 4869 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561876 4869 flags.go:64] FLAG: --config-dir=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561880 4869 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561885 4869 flags.go:64] FLAG: --container-log-max-files="5"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561890 4869 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561894 4869 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561899 4869 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561903 4869 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561908 4869 flags.go:64] FLAG: --contention-profiling="false"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561912 4869 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561916 4869 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561920 4869 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561924 4869 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561930 4869 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561934 4869 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561938 4869 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561942 4869 flags.go:64] FLAG: --enable-load-reader="false"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561946 4869 flags.go:64] FLAG: --enable-server="true"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561950 4869 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561959 4869 flags.go:64] FLAG: --event-burst="100"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561964 4869 flags.go:64] FLAG: --event-qps="50"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561968 4869 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561972 4869 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561976 4869 flags.go:64] FLAG: --eviction-hard=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561981 4869 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561985 4869 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561990 4869 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561995 4869 flags.go:64] FLAG: --eviction-soft=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.561999 4869 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562003 4869 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562008 4869 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562012 4869 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562016 4869 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562020 4869 flags.go:64] FLAG: --fail-swap-on="true"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562025 4869 flags.go:64] FLAG: --feature-gates=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562031 4869 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562035 4869 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562039 4869 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562044 4869 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562048 4869 flags.go:64] FLAG: --healthz-port="10248"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562052 4869 flags.go:64] FLAG: --help="false"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562056 4869 flags.go:64] FLAG: --hostname-override=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562060 4869 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562064 4869 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562070 4869 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562074 4869 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562078 4869 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562083 4869 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562087 4869 flags.go:64] FLAG: --image-service-endpoint=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562091 4869 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562095 4869 flags.go:64] FLAG: --kube-api-burst="100"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562100 4869 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562104 4869 flags.go:64] FLAG: --kube-api-qps="50"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562109 4869 flags.go:64] FLAG: --kube-reserved=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562113 4869 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562117 4869 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562121 4869 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562125 4869 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562129 4869 flags.go:64] FLAG: --lock-file=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562133 4869 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562137 4869 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562141 4869 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562153 4869 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562157 4869 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562162 4869 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562166 4869 flags.go:64] FLAG: --logging-format="text"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562170 4869 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562175 4869 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562179 4869 flags.go:64] FLAG: --manifest-url=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562183 4869 flags.go:64] FLAG: --manifest-url-header=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562189 4869 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562193 4869 flags.go:64] FLAG: --max-open-files="1000000"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562198 4869 flags.go:64] FLAG: --max-pods="110"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562203 4869 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562207 4869 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562211 4869 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562216 4869 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562220 4869 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562224 4869 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562228 4869 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562238 4869 flags.go:64] FLAG: --node-status-max-images="50"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562242 4869 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562247 4869 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562251 4869 flags.go:64] FLAG: --pod-cidr=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562255 4869 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562262 4869 flags.go:64] FLAG: --pod-manifest-path=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562267 4869 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562271 4869 flags.go:64] FLAG: --pods-per-core="0"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562276 4869 flags.go:64] FLAG: --port="10250"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562280 4869 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562284 4869 flags.go:64] FLAG: --provider-id=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562289 4869 flags.go:64] FLAG: --qos-reserved=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562293 4869 flags.go:64] FLAG: --read-only-port="10255"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562297 4869 flags.go:64] FLAG: --register-node="true"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562302 4869 flags.go:64] FLAG: --register-schedulable="true"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562306 4869 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562315 4869 flags.go:64] FLAG: --registry-burst="10"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562319 4869 flags.go:64] FLAG: --registry-qps="5"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562323 4869 flags.go:64] FLAG: --reserved-cpus=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562328 4869 flags.go:64] FLAG: --reserved-memory=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562334 4869 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562338 4869 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562342 4869 flags.go:64] FLAG: --rotate-certificates="false"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562346 4869 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562355 4869 flags.go:64] FLAG: --runonce="false"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562359 4869 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562363 4869 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562367 4869 flags.go:64] FLAG: --seccomp-default="false"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562371 4869 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562375 4869 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562380 4869 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562384 4869 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562389 4869 flags.go:64] FLAG: --storage-driver-password="root"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562393 4869 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562397 4869 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562401 4869 flags.go:64] FLAG: --storage-driver-user="root"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562405 4869 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562409 4869 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562413 4869 flags.go:64] FLAG: --system-cgroups=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562417 4869 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562424 4869 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562428 4869 flags.go:64] FLAG: --tls-cert-file=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562432 4869 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562437 4869 flags.go:64] FLAG: --tls-min-version=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562441 4869 flags.go:64] FLAG: --tls-private-key-file=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562445 4869 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562449 4869 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562453 4869 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562457 4869 flags.go:64] FLAG: --v="2"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562462 4869 flags.go:64] FLAG: --version="false"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562468 4869 flags.go:64] FLAG: --vmodule=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562472 4869 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562477 4869 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562579 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562585 4869 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562592 4869 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562600 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562604 4869 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562609 4869 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562614 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562618 4869 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562622 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562627 4869 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562632 4869 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562636 4869 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562640 4869 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562644 4869 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562648 4869 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562653 4869 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562657 4869 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562686 4869 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562691 4869 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562695 4869 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562700 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562705 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562710 4869 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562714 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562718 4869 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562722 4869 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562726 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562732 4869 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562737 4869 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562742 4869 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562746 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562752 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562757 4869 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562761 4869 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562765 4869 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562773 4869 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562777 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562783 4869 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562789 4869 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562796 4869 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562801 4869 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562806 4869 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562811 4869 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562816 4869 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562820 4869 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562825 4869 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562830 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562835 4869 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562839 4869 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562844 4869 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562848 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562852 4869 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562857 4869 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562861 4869 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562866 4869 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562870 4869 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562875 4869 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562880 4869 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562884 4869 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562889 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562893 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562924 4869 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562930 4869 feature_gate.go:330] unrecognized feature gate: Example
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562941 4869 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562946 4869 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562951 4869 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562956 4869 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562964 4869 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562969 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562973 4869 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.562978 4869 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.562992 4869 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.573014 4869 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.573059 4869 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573153 4869 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573166 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573172 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573177 4869 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573182 4869 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573187 4869 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573192 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573197 4869 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573202 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573206 4869 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573213 4869 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573222 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573227 4869 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573232 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573236 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573241 4869 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573246 4869 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573250 4869 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573255 4869 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573260 4869 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573265 4869 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573269 4869 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573273 4869 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573278 4869 feature_gate.go:330] unrecognized feature gate: Example
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573284 4869 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573290 4869 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573296 4869 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573301 4869 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573306 4869 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573311 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573315 4869 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573320 4869 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573324 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573329 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573335 4869 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573340 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573346 4869 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573351 4869 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573356 4869 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573360 4869 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573365 4869 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573369 4869 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573373 4869 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573378 4869 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573382 4869 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573387 4869 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573391 4869 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573396 4869 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573400 4869 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573405 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573409 4869 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573415 4869 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573421 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573426 4869 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573433 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573438 4869 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573443 4869 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573447 4869 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573452 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573456 4869 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573462 4869 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573466 4869 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573471 4869 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573476 4869 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573480 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573485 4869 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573489 4869 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573494 4869 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573499 4869 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573503 4869 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573509 4869 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.573517 4869 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573692 4869 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573704 4869 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573712 4869 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573718 4869 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573745 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573751 4869 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573756 4869 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573761 4869 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573766 4869 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573772 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573777 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573782 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573786 4869 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573791 4869 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573796 4869 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573801 4869 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573806 4869 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573810 4869 feature_gate.go:330] unrecognized feature gate: Example
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573815 4869 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573819 4869 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573824 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573829 4869 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573833 4869 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573838 4869 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573844 4869 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573851 4869 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573858 4869 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573864 4869 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573870 4869 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573875 4869 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573880 4869 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573884 4869 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573889 4869 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573894 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573899 4869 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573905 4869 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573909 4869 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573914 4869 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573919 4869 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573924 4869 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573929 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573933 4869 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573937 4869 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573942 4869 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573946 4869 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573950 4869 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573955 4869 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573959 4869 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573965 4869 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573969 4869 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573974 4869 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573979 4869 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573983 4869 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573988 4869 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573992 4869 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.573996 4869 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.574001 4869 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.574005 4869 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.574010 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.574014 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.574018 4869 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.574023 4869 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.574027 4869 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.574032 4869 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.574036 4869 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.574040 4869 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.574045 4869 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.574049 4869 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.574053 4869 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.574059 4869 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.574067 4869 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.574076 4869 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.574319 4869 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.577404 4869 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.577514 4869 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.578126 4869 server.go:997] "Starting client certificate rotation"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.578155 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.578573 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-05 04:33:42.061289898 +0000 UTC
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.578715 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.583433 4869 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 06 13:59:41 crc kubenswrapper[4869]: E0106 13:59:41.585012 4869 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.230:6443: connect: connection refused" logger="UnhandledError"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.585149 4869 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.590935 4869 log.go:25] "Validated CRI v1 runtime API"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.607220 4869 log.go:25] "Validated CRI v1 image API"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.608933 4869 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.611300 4869 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-06-13-55-26-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.611339 4869 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252
minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.630773 4869 manager.go:217] Machine: {Timestamp:2026-01-06 13:59:41.629250827 +0000 UTC m=+0.168938521 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:7374d6af-17bd-430d-99ca-aaf4c2e05545 BootID:efa88f90-2f2b-4bd6-b8cc-4623e7e87b81 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:be:19:e8 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:be:19:e8 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f8:80:0f Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ff:ad:ec Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:4f:23:1c Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:d6:a7:e6 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:32:bd:f8 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:1e:48:4c:82:b8:ed Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ba:f7:17:9e:a4:9d Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] 
SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.631065 4869 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
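The Machine entry above reports capacities in raw bytes (MemoryCapacity:33654128640, the vda disk at 214748364800, /dev/vda4 at 85292941312) and presents the 12 vCPUs as 12 single-core sockets (NumCores:12, NumPhysicalCores:1, NumSockets:12), which is a common way for a virtualized topology to appear. A small sketch for converting those byte counts to binary-prefixed units, with values copied from the entry above:

    def human(n: float) -> str:
        # Convert a byte count to a binary-prefixed string.
        for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
            if n < 1024:
                return f"{n:.1f} {unit}"
            n /= 1024
        return f"{n:.1f} PiB"

    print(human(33654128640))    # MemoryCapacity   -> ~31.3 GiB
    print(human(85292941312))    # /dev/vda4 (/var) -> ~79.4 GiB
    print(human(214748364800))   # vda disk size    -> 200.0 GiB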
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.631225 4869 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.631964 4869 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.632207 4869 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.632247 4869 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.632481 4869 topology_manager.go:138] "Creating topology manager with none policy"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.632493 4869 container_manager_linux.go:303] "Creating device plugin manager"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.632764 4869 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.632802 4869 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.633056 4869 state_mem.go:36] "Initialized new in-memory state store"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.633203 4869 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.634365 4869 kubelet.go:418] "Attempting to sync node with API server"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.634388 4869 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
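The container_manager_linux.go:272 entry above embeds the node's container-manager configuration as a JSON object, including SystemReserved and the five hard-eviction thresholds. Since the payload is valid JSON, it can be pulled out of a saved log and inspected directly; a sketch, where the file name and the end-of-object anchor on "CgroupVersion":2 are assumptions tied to this particular dump:

    import json, re

    # Extract the nodeConfig JSON payload from the log line above and
    # print the reservations and eviction thresholds.
    text = open("kubelet.log").read()
    m = re.search(r'nodeConfig=(\{.*?"CgroupVersion":2\})', text)
    if m:
        cfg = json.loads(m.group(1))
        print(cfg["SystemReserved"])
        for t in cfg["HardEvictionThresholds"]:
            print(t["Signal"], t["Operator"], t["Value"])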
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.634405 4869 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.634419 4869 kubelet.go:324] "Adding apiserver pod source"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.634434 4869 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.636226 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.230:6443: connect: connection refused
Jan 06 13:59:41 crc kubenswrapper[4869]: E0106 13:59:41.636368 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.230:6443: connect: connection refused" logger="UnhandledError"
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.636394 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.230:6443: connect: connection refused
Jan 06 13:59:41 crc kubenswrapper[4869]: E0106 13:59:41.636505 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.230:6443: connect: connection refused" logger="UnhandledError"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.637178 4869 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.637568 4869 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
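Both reflector failures above, like the earlier certificate-signing-request failure, reflect the same underlying condition: nothing is accepting connections on api-int.crc.testing:6443 (38.102.83.230) yet, so every list/watch call gets "connection refused" until the kube-apiserver comes up. The probe is easy to reproduce independently; a sketch using the endpoint taken from these entries:

    import socket

    # Probe the endpoint the reflectors are failing against
    # (host and port copied from the log entries above).
    try:
        socket.create_connection(("api-int.crc.testing", 6443), timeout=3).close()
        print("apiserver port is accepting connections")
    except OSError as e:
        print(f"connect failed: {e}")  # e.g. [Errno 111] Connection refused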
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.638465 4869 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.639094 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.639138 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.639148 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.639158 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.639181 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.639192 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.639201 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.639218 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.639229 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.639238 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.639263 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.639273 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.640047 4869 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.640478 4869 server.go:1280] "Started kubelet"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.640827 4869 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.640810 4869 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.641477 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.230:6443: connect: connection refused
Jan 06 13:59:41 crc systemd[1]: Started Kubernetes Kubelet.
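The ratelimit.go:55 entry above configures the podresources endpoint with qps=100 and burstTokens=10, i.e. token-bucket limiting: a bucket that holds at most 10 tokens, refills at 100 tokens per second, and spends one token per request. A minimal illustration of those semantics with the logged parameters (a sketch of the idea only, not the kubelet's actual implementation):

    import time

    class TokenBucket:
        # Token bucket with the parameters logged above: refill rate of
        # qps tokens/second, capacity of burst tokens.
        def __init__(self, qps: float = 100.0, burst: int = 10):
            self.rate, self.capacity = qps, burst
            self.tokens, self.last = float(burst), time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    bucket = TokenBucket()
    # Roughly the first 10 of these rapid calls succeed; the rest are
    # limited to the refill rate.
    print(sum(bucket.allow() for _ in range(50)))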
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.642521 4869 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 06 13:59:41 crc kubenswrapper[4869]: E0106 13:59:41.644562 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.230:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1888290b284d5824 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-06 13:59:41.640456228 +0000 UTC m=+0.180143902,LastTimestamp:2026-01-06 13:59:41.640456228 +0000 UTC m=+0.180143902,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.649461 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.649503 4869 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.649582 4869 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.649609 4869 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.649554 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 15:00:04.638237819 +0000 UTC
Jan 06 13:59:41 crc kubenswrapper[4869]: E0106 13:59:41.650193 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.650475 4869 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.650466 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.230:6443: connect: connection refused
Jan 06 13:59:41 crc kubenswrapper[4869]: E0106 13:59:41.650651 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.230:6443: connect: connection refused" logger="UnhandledError"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.651222 4869 server.go:460] "Adding debug handlers to kubelet server"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.651889 4869 factory.go:153] Registering CRI-O factory
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.651923 4869 factory.go:221] Registration of the crio container factory successfully
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.652004 4869 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.652024 4869 factory.go:55] Registering systemd factory
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.652043 4869 factory.go:221] Registration of the systemd container factory successfully
Jan 06 13:59:41 crc kubenswrapper[4869]: E0106 13:59:41.651879 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="200ms"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.652078 4869 factory.go:103] Registering Raw factory
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.652136 4869 manager.go:1196] Started watching for new ooms in manager
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.653087 4869 manager.go:319] Starting recovery of all containers
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.657960 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.658010 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.658027 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.658041 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.658056 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.658068 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.658080 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.658520 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 06 13:59:41 crc kubenswrapper[4869]:
I0106 13:59:41.658538 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.659491 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662054 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662088 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662104 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662123 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662138 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662150 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662166 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662178 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662192 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662258 4869 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662277 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662290 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662307 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662322 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662334 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662346 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662362 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662378 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662393 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662408 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662423 4869 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662439 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662454 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662467 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662479 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662493 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662509 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662523 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662540 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662554 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662567 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662581 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662596 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662610 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662623 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662636 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662650 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662716 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662729 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662742 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662754 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662768 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662786 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662802 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662817 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662833 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662850 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662863 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662875 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662891 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662905 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662920 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662936 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662950 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662965 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662979 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.662993 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663007 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663021 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663036 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663049 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663063 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663075 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663088 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663100 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663114 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663128 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663142 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663157 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663170 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663225 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663243 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663258 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663273 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663287 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663300 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663313 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663326 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663339 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663352 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663366 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663380 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663394 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663407 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663422 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663443 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663457 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663472 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663487 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663500 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663515 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663528 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663542 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663556 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663575 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663591 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663616 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663632 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663646 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663685 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663703 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663717 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663732 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663746 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663759 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663773 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663790 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663804 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663817 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663830 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663845 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663858 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663872 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663886 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663898 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663913 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663928 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663941 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663954 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663966 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663980 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.663992 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.664005 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.664016 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.664029 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.664042 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.664054 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.664067 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.664080 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.664092 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.664104 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.664117 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.664131 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.664143 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.664156 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.670407 4869 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.670865 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.670891 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.670917 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.670931 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.670949 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.670969 4869 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.670983 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671004 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671017 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671031 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671052 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671066 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671084 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671099 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671112 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671131 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671144 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671163 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671178 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671194 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671213 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671228 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671246 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671258 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671273 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671291 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671306 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671325 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671342 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671357 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671376 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671389 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671404 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671424 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671438 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671456 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671469 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671483 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671501 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671515 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671562 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671578 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671594 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671613 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671627 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671645 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671679 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671694 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671710 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671723 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671741 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671753 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671765 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671782 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671795 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671810 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671825 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671837 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671854 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671867 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671880 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671914 4869 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671927 4869 reconstruct.go:97] "Volume reconstruction finished" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.671937 4869 reconciler.go:26] "Reconciler: start to sync state" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.677809 4869 manager.go:324] Recovery completed Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.687419 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.689420 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.689460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.689474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.690184 4869 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.690198 4869 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.690217 4869 state_mem.go:36] "Initialized new in-memory state store" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.700684 4869 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.702933 4869 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.703042 4869 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.703098 4869 kubelet.go:2335] "Starting kubelet main sync loop" Jan 06 13:59:41 crc kubenswrapper[4869]: E0106 13:59:41.703198 4869 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 06 13:59:41 crc kubenswrapper[4869]: W0106 13:59:41.710268 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.230:6443: connect: connection refused Jan 06 13:59:41 crc kubenswrapper[4869]: E0106 13:59:41.710346 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.230:6443: connect: connection refused" logger="UnhandledError" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.718683 4869 policy_none.go:49] "None policy: Start" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.719492 4869 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.719523 4869 state_mem.go:35] "Initializing new in-memory state store" Jan 06 13:59:41 crc kubenswrapper[4869]: E0106 13:59:41.750632 4869 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.771740 4869 manager.go:334] "Starting Device Plugin manager" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.771834 4869 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.771854 4869 server.go:79] "Starting device plugin registration server" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.772323 4869 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.772340 4869 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.772689 4869 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.772847 4869 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.772862 4869 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 06 13:59:41 crc kubenswrapper[4869]: E0106 13:59:41.785362 4869 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.803768 4869 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 06 13:59:41 crc kubenswrapper[4869]: 
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.803927 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.805777 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.805841 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.805854 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.806120 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.806326 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.806369 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.807784 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.807819 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.807833 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.811099 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.811127 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.811138 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.811280 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.811468 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.811510 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.812081 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.812107 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.812118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.812225 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.812343 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.812377 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.812705 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.812723 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.812733 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.812999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.813012 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.813020 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.813028 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.813041 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.813030 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.813152 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.813228 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.813257 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.813697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.813721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.813730 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.813869 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.813898 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.814258 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.814280 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.814292 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.814472 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.814487 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.814494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 13:59:41 crc kubenswrapper[4869]: E0106 13:59:41.853220 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="400ms"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.872426 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.873246 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.873274 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.873286 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.873291 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.873310 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.873318 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.873339 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.873397 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.873443 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.873474 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.873492 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.873508 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.873524 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.873540 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.873558 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.873575 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 06 13:59:41 crc 
kubenswrapper[4869]: I0106 13:59:41.873593 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: E0106 13:59:41.873647 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.230:6443: connect: connection refused" node="crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.873989 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.874026 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975211 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975260 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975277 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975293 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975310 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975329 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975348 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975367 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975445 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975465 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975517 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975537 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975536 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975560 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975596 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975565 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975502 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975691 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975715 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975731 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975834 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975850 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975782 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975780 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975772 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975905 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975799 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975922 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975936 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 13:59:41 crc kubenswrapper[4869]: I0106 13:59:41.975947 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.074069 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.076347 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.076469 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.076538 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.076622 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 06 13:59:42 crc kubenswrapper[4869]: E0106 13:59:42.077408 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.230:6443: connect: connection refused" node="crc" Jan 06 13:59:42 crc kubenswrapper[4869]: E0106 13:59:42.097475 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.230:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1888290b284d5824 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-06 13:59:41.640456228 +0000 UTC m=+0.180143902,LastTimestamp:2026-01-06 13:59:41.640456228 +0000 UTC m=+0.180143902,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.152025 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.162689 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 13:59:42 crc kubenswrapper[4869]: W0106 13:59:42.172690 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-b3355e6cc739921fdc85dadaefa8d65a409c69074ce787a158faa1f241b3f891 WatchSource:0}: Error finding container b3355e6cc739921fdc85dadaefa8d65a409c69074ce787a158faa1f241b3f891: Status 404 returned error can't find the container with id b3355e6cc739921fdc85dadaefa8d65a409c69074ce787a158faa1f241b3f891 Jan 06 13:59:42 crc kubenswrapper[4869]: W0106 13:59:42.178708 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-5c5f9c2a794cb1677d310cd128d7da3844d19fd22b4843f421d69295fdde2b78 WatchSource:0}: Error finding container 5c5f9c2a794cb1677d310cd128d7da3844d19fd22b4843f421d69295fdde2b78: Status 404 returned error can't find the container with id 5c5f9c2a794cb1677d310cd128d7da3844d19fd22b4843f421d69295fdde2b78 Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.199434 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.229148 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.236123 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 06 13:59:42 crc kubenswrapper[4869]: W0106 13:59:42.239440 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-ed64b5aaf5d53cd00ddee48c91e8ea4520b2d2d1449e43bf493f6ddd9c9540c1 WatchSource:0}: Error finding container ed64b5aaf5d53cd00ddee48c91e8ea4520b2d2d1449e43bf493f6ddd9c9540c1: Status 404 returned error can't find the container with id ed64b5aaf5d53cd00ddee48c91e8ea4520b2d2d1449e43bf493f6ddd9c9540c1 Jan 06 13:59:42 crc kubenswrapper[4869]: W0106 13:59:42.249535 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e5c2e7493e6dd99357f27c712d68cd8a8434b9f077905295ed5bd12da49b65d7 WatchSource:0}: Error finding container e5c2e7493e6dd99357f27c712d68cd8a8434b9f077905295ed5bd12da49b65d7: Status 404 returned error can't find the container with id e5c2e7493e6dd99357f27c712d68cd8a8434b9f077905295ed5bd12da49b65d7 Jan 06 13:59:42 crc kubenswrapper[4869]: E0106 13:59:42.254406 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="800ms" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.478722 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.480106 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.480155 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.480169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.480196 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 06 13:59:42 crc kubenswrapper[4869]: E0106 13:59:42.480713 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.230:6443: connect: connection refused" node="crc" Jan 06 13:59:42 crc kubenswrapper[4869]: W0106 13:59:42.554282 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.230:6443: connect: connection refused Jan 06 13:59:42 crc kubenswrapper[4869]: E0106 13:59:42.554636 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.230:6443: connect: connection refused" logger="UnhandledError" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.643353 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 
38.102.83.230:6443: connect: connection refused Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.650515 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 08:29:30.243372908 +0000 UTC Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.650621 4869 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 210h29m47.592755559s for next certificate rotation Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.709065 4869 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="72429889a2dcc18c16f668309e6ac6c21d58ff9d3fd6496cebacf51ad9dc6b39" exitCode=0 Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.709142 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"72429889a2dcc18c16f668309e6ac6c21d58ff9d3fd6496cebacf51ad9dc6b39"} Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.709319 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e12a8fda211a7a55067597dd7c48fc12f142d4dff1091fbeafdfb6c14cf03e0a"} Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.709450 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.710803 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0691d53b8f75c65c7afbf74c6a46e97d168af4a60da259d2c20a1c6d1cc380e8"} Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.710870 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5c5f9c2a794cb1677d310cd128d7da3844d19fd22b4843f421d69295fdde2b78"} Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.710870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.710917 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.710930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.712145 4869 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="3246a1d845db1446213cb7c8735cde4f0a9e4652039096429161aa9a80241a75" exitCode=0 Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.712231 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"3246a1d845db1446213cb7c8735cde4f0a9e4652039096429161aa9a80241a75"} Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.712274 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b3355e6cc739921fdc85dadaefa8d65a409c69074ce787a158faa1f241b3f891"} Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.712350 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.713325 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.713368 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.713382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.713799 4869 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74" exitCode=0 Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.713850 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74"} Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.713868 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e5c2e7493e6dd99357f27c712d68cd8a8434b9f077905295ed5bd12da49b65d7"} Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.713922 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.714825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.714842 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.714850 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.715642 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6" exitCode=0 Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.715688 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6"} Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.715734 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ed64b5aaf5d53cd00ddee48c91e8ea4520b2d2d1449e43bf493f6ddd9c9540c1"} Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.715805 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.716695 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.716726 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.716735 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.718866 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.720030 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.720080 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:42 crc kubenswrapper[4869]: I0106 13:59:42.720094 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:42 crc kubenswrapper[4869]: W0106 13:59:42.768772 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.230:6443: connect: connection refused Jan 06 13:59:42 crc kubenswrapper[4869]: E0106 13:59:42.768858 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.230:6443: connect: connection refused" logger="UnhandledError" Jan 06 13:59:43 crc kubenswrapper[4869]: W0106 13:59:43.052946 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.230:6443: connect: connection refused Jan 06 13:59:43 crc kubenswrapper[4869]: E0106 13:59:43.053047 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.230:6443: connect: connection refused" logger="UnhandledError" Jan 06 13:59:43 crc kubenswrapper[4869]: E0106 13:59:43.055800 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="1.6s" Jan 06 13:59:43 crc kubenswrapper[4869]: W0106 13:59:43.149385 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.230:6443: connect: connection refused Jan 06 13:59:43 crc kubenswrapper[4869]: E0106 13:59:43.149500 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial 
tcp 38.102.83.230:6443: connect: connection refused" logger="UnhandledError" Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.280860 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.282575 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.282647 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.282689 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.282731 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 06 13:59:43 crc kubenswrapper[4869]: E0106 13:59:43.283446 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.230:6443: connect: connection refused" node="crc" Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.671719 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.723597 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.723570 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"deaac0c586758b6d65136709b9b1b309d79fece37befc229d2ae12d6f9c5689c"} Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.723696 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fb0ea65794fd1d3b4054df7550af03061731938142e02ae6df80d5af06db918d"} Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.723723 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2745854b54c994b4e48e734a8faa69f6b7f551ecd1554e325921df87f95eff36"} Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.724592 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.724630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.724645 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.727792 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"421b774c5b400463d2d328c730de028c4d58793ba0e55b35fda0f96ffff6d0f9"} Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.727896 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 
13:59:43.728527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.728569 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.728579 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.730055 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4d94b86e136d1d14bac701960114e85125092e2d511e21bbec0a9b0f43e29989"} Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.730091 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"435bff2936635a82afe7ca4597f37b18da009622047b4c6f0908d2562fbf9067"} Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.730104 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a69058b488c453bb2e06695939568f0297a970aff932569db85da433feb5814d"} Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.730217 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.730872 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.730900 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.730909 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.732701 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7"} Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.732743 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a"} Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.732758 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f"} Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.732767 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed"} Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.734054 4869 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="f7455d52ddd31953ed0ee645ab2f203e084dcd080070e7b58f37dfd0d9c22b7e" exitCode=0 Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.734085 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f7455d52ddd31953ed0ee645ab2f203e084dcd080070e7b58f37dfd0d9c22b7e"} Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.734182 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.734778 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.734802 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:43 crc kubenswrapper[4869]: I0106 13:59:43.734811 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:44 crc kubenswrapper[4869]: I0106 13:59:44.741721 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cdc6374f0881a3f15f2d74550750599ecf2aeb8da039cd5c5ac546e67dc0edcd"} Jan 06 13:59:44 crc kubenswrapper[4869]: I0106 13:59:44.741849 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:44 crc kubenswrapper[4869]: I0106 13:59:44.746426 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:44 crc kubenswrapper[4869]: I0106 13:59:44.746526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:44 crc kubenswrapper[4869]: I0106 13:59:44.746559 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:44 crc kubenswrapper[4869]: I0106 13:59:44.750912 4869 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="324e9f398a7d079e8655da4c686ad049062895888e52e865a46667805c6f3bad" exitCode=0 Jan 06 13:59:44 crc kubenswrapper[4869]: I0106 13:59:44.750983 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"324e9f398a7d079e8655da4c686ad049062895888e52e865a46667805c6f3bad"} Jan 06 13:59:44 crc kubenswrapper[4869]: I0106 13:59:44.751061 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:44 crc kubenswrapper[4869]: I0106 13:59:44.751276 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:44 crc kubenswrapper[4869]: I0106 13:59:44.751939 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:44 crc kubenswrapper[4869]: I0106 13:59:44.751968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:44 crc kubenswrapper[4869]: I0106 13:59:44.751978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:44 crc kubenswrapper[4869]: I0106 13:59:44.752579 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:44 crc kubenswrapper[4869]: I0106 13:59:44.752626 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:44 crc kubenswrapper[4869]: I0106 13:59:44.752645 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:44 crc kubenswrapper[4869]: I0106 13:59:44.771992 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 13:59:44 crc kubenswrapper[4869]: I0106 13:59:44.884351 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:44 crc kubenswrapper[4869]: I0106 13:59:44.885879 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:44 crc kubenswrapper[4869]: I0106 13:59:44.885933 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:44 crc kubenswrapper[4869]: I0106 13:59:44.885946 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:44 crc kubenswrapper[4869]: I0106 13:59:44.885976 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 06 13:59:45 crc kubenswrapper[4869]: I0106 13:59:45.759041 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 06 13:59:45 crc kubenswrapper[4869]: I0106 13:59:45.759099 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:45 crc kubenswrapper[4869]: I0106 13:59:45.759017 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"72a3fa0b5d3a32b0552173a1e629ba10e079ba4d8653e5309b464a144b42bf6d"} Jan 06 13:59:45 crc kubenswrapper[4869]: I0106 13:59:45.759210 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0a3c704c3d9d1f645327763d8c21154f8d48539f09d4b6a5320c10d1a541341b"} Jan 06 13:59:45 crc kubenswrapper[4869]: I0106 13:59:45.759237 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ef441fd9ad45f95046deee5f5bee3196e21b25e95d6c9dccb5fed6ca8f20cce7"} Jan 06 13:59:45 crc kubenswrapper[4869]: I0106 13:59:45.759256 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"53599a23a048fbf15dc025947dcd86295c82ff4e3c5f7435e0995e2e9d2fec8a"} Jan 06 13:59:45 crc kubenswrapper[4869]: I0106 13:59:45.760236 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:45 crc kubenswrapper[4869]: I0106 13:59:45.760289 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:45 crc kubenswrapper[4869]: I0106 13:59:45.760302 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:46 crc kubenswrapper[4869]: I0106 13:59:46.769207 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"549f8f8f1d1641ed7b602c9dce9eb9fc3f34eba2b6a1538b80f9121efaaa7b23"} Jan 06 13:59:46 crc kubenswrapper[4869]: I0106 13:59:46.769225 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 06 13:59:46 crc kubenswrapper[4869]: I0106 13:59:46.769360 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:46 crc kubenswrapper[4869]: I0106 13:59:46.769411 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:46 crc kubenswrapper[4869]: I0106 13:59:46.771287 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:46 crc kubenswrapper[4869]: I0106 13:59:46.771353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:46 crc kubenswrapper[4869]: I0106 13:59:46.771373 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:46 crc kubenswrapper[4869]: I0106 13:59:46.771591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:46 crc kubenswrapper[4869]: I0106 13:59:46.771696 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:46 crc kubenswrapper[4869]: I0106 13:59:46.771720 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:46 crc kubenswrapper[4869]: I0106 13:59:46.797346 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 13:59:47 crc kubenswrapper[4869]: I0106 13:59:47.015494 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 13:59:47 crc kubenswrapper[4869]: I0106 13:59:47.588851 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 13:59:47 crc kubenswrapper[4869]: I0106 13:59:47.589038 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:47 crc kubenswrapper[4869]: I0106 13:59:47.590527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:47 crc kubenswrapper[4869]: I0106 13:59:47.590604 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:47 crc kubenswrapper[4869]: I0106 13:59:47.590625 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:47 crc kubenswrapper[4869]: I0106 13:59:47.772227 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:47 crc kubenswrapper[4869]: I0106 13:59:47.772307 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:47 crc kubenswrapper[4869]: I0106 13:59:47.774086 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:47 crc kubenswrapper[4869]: I0106 13:59:47.774154 4869 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:47 crc kubenswrapper[4869]: I0106 13:59:47.774174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:47 crc kubenswrapper[4869]: I0106 13:59:47.774177 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:47 crc kubenswrapper[4869]: I0106 13:59:47.774222 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:47 crc kubenswrapper[4869]: I0106 13:59:47.774243 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:48 crc kubenswrapper[4869]: I0106 13:59:48.241890 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 06 13:59:48 crc kubenswrapper[4869]: I0106 13:59:48.242059 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:48 crc kubenswrapper[4869]: I0106 13:59:48.243199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:48 crc kubenswrapper[4869]: I0106 13:59:48.243235 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:48 crc kubenswrapper[4869]: I0106 13:59:48.243244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:48 crc kubenswrapper[4869]: I0106 13:59:48.774987 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:48 crc kubenswrapper[4869]: I0106 13:59:48.776121 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:48 crc kubenswrapper[4869]: I0106 13:59:48.776152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:48 crc kubenswrapper[4869]: I0106 13:59:48.776161 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:48 crc kubenswrapper[4869]: I0106 13:59:48.855838 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 06 13:59:48 crc kubenswrapper[4869]: I0106 13:59:48.856031 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:48 crc kubenswrapper[4869]: I0106 13:59:48.857530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:48 crc kubenswrapper[4869]: I0106 13:59:48.857595 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:48 crc kubenswrapper[4869]: I0106 13:59:48.857614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:50 crc kubenswrapper[4869]: I0106 13:59:50.589349 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 06 13:59:50 crc 
kubenswrapper[4869]: I0106 13:59:50.589449 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 06 13:59:51 crc kubenswrapper[4869]: I0106 13:59:51.455878 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 13:59:51 crc kubenswrapper[4869]: I0106 13:59:51.456086 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:51 crc kubenswrapper[4869]: I0106 13:59:51.457638 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:51 crc kubenswrapper[4869]: I0106 13:59:51.457719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:51 crc kubenswrapper[4869]: I0106 13:59:51.457741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:51 crc kubenswrapper[4869]: E0106 13:59:51.785549 4869 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 06 13:59:52 crc kubenswrapper[4869]: I0106 13:59:52.994549 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 13:59:52 crc kubenswrapper[4869]: I0106 13:59:52.994826 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:52 crc kubenswrapper[4869]: I0106 13:59:52.996481 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:52 crc kubenswrapper[4869]: I0106 13:59:52.996530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:52 crc kubenswrapper[4869]: I0106 13:59:52.996546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:53 crc kubenswrapper[4869]: I0106 13:59:53.092336 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 13:59:53 crc kubenswrapper[4869]: I0106 13:59:53.103546 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 13:59:53 crc kubenswrapper[4869]: I0106 13:59:53.643615 4869 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 06 13:59:53 crc kubenswrapper[4869]: E0106 13:59:53.673318 4869 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 06 13:59:53 crc kubenswrapper[4869]: I0106 13:59:53.787861 4869 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:53 crc kubenswrapper[4869]: I0106 13:59:53.788960 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:53 crc kubenswrapper[4869]: I0106 13:59:53.788992 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:53 crc kubenswrapper[4869]: I0106 13:59:53.789002 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:53 crc kubenswrapper[4869]: I0106 13:59:53.792188 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 13:59:54 crc kubenswrapper[4869]: W0106 13:59:54.435030 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 06 13:59:54 crc kubenswrapper[4869]: I0106 13:59:54.435140 4869 trace.go:236] Trace[1442980439]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Jan-2026 13:59:44.433) (total time: 10001ms): Jan 06 13:59:54 crc kubenswrapper[4869]: Trace[1442980439]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:59:54.435) Jan 06 13:59:54 crc kubenswrapper[4869]: Trace[1442980439]: [10.001807528s] [10.001807528s] END Jan 06 13:59:54 crc kubenswrapper[4869]: E0106 13:59:54.435165 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 06 13:59:54 crc kubenswrapper[4869]: W0106 13:59:54.578351 4869 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 06 13:59:54 crc kubenswrapper[4869]: I0106 13:59:54.578456 4869 trace.go:236] Trace[1651245022]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Jan-2026 13:59:44.577) (total time: 10001ms): Jan 06 13:59:54 crc kubenswrapper[4869]: Trace[1651245022]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:59:54.578) Jan 06 13:59:54 crc kubenswrapper[4869]: Trace[1651245022]: [10.001268524s] [10.001268524s] END Jan 06 13:59:54 crc kubenswrapper[4869]: E0106 13:59:54.578487 4869 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 06 13:59:54 crc kubenswrapper[4869]: I0106 13:59:54.619935 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with 
statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 06 13:59:54 crc kubenswrapper[4869]: I0106 13:59:54.620014 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 06 13:59:54 crc kubenswrapper[4869]: I0106 13:59:54.626485 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 06 13:59:54 crc kubenswrapper[4869]: I0106 13:59:54.626557 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 06 13:59:54 crc kubenswrapper[4869]: I0106 13:59:54.798792 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]log ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]etcd ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/generic-apiserver-start-informers ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/priority-and-fairness-filter ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/start-apiextensions-informers ok Jan 06 13:59:54 crc kubenswrapper[4869]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Jan 06 13:59:54 crc kubenswrapper[4869]: [-]poststarthook/crd-informer-synced failed: reason withheld Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/start-system-namespaces-controller ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 06 13:59:54 crc kubenswrapper[4869]: 
[+]poststarthook/start-service-ip-repair-controllers ok Jan 06 13:59:54 crc kubenswrapper[4869]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 06 13:59:54 crc kubenswrapper[4869]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/bootstrap-controller ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/start-kube-aggregator-informers ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 06 13:59:54 crc kubenswrapper[4869]: [-]poststarthook/apiservice-registration-controller failed: reason withheld Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 06 13:59:54 crc kubenswrapper[4869]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]autoregister-completion ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/apiservice-openapi-controller ok Jan 06 13:59:54 crc kubenswrapper[4869]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 06 13:59:54 crc kubenswrapper[4869]: livez check failed Jan 06 13:59:54 crc kubenswrapper[4869]: I0106 13:59:54.798853 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 06 13:59:54 crc kubenswrapper[4869]: I0106 13:59:54.799245 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:54 crc kubenswrapper[4869]: I0106 13:59:54.800159 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:54 crc kubenswrapper[4869]: I0106 13:59:54.800180 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:54 crc kubenswrapper[4869]: I0106 13:59:54.800191 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:55 crc kubenswrapper[4869]: I0106 13:59:55.802284 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:55 crc kubenswrapper[4869]: I0106 13:59:55.803210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:55 crc kubenswrapper[4869]: I0106 13:59:55.803244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:55 crc kubenswrapper[4869]: I0106 13:59:55.803256 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:55 crc kubenswrapper[4869]: I0106 13:59:55.957558 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 06 13:59:55 crc kubenswrapper[4869]: I0106 13:59:55.957873 4869 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 06 13:59:55 crc kubenswrapper[4869]: I0106 13:59:55.959689 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:55 crc kubenswrapper[4869]: I0106 13:59:55.959736 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:55 crc kubenswrapper[4869]: I0106 13:59:55.959749 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:55 crc kubenswrapper[4869]: I0106 13:59:55.995318 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 06 13:59:56 crc kubenswrapper[4869]: I0106 13:59:56.805517 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:56 crc kubenswrapper[4869]: I0106 13:59:56.807131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:56 crc kubenswrapper[4869]: I0106 13:59:56.807230 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:56 crc kubenswrapper[4869]: I0106 13:59:56.807260 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:56 crc kubenswrapper[4869]: I0106 13:59:56.827084 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 06 13:59:57 crc kubenswrapper[4869]: I0106 13:59:57.808581 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:57 crc kubenswrapper[4869]: I0106 13:59:57.810871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:57 crc kubenswrapper[4869]: I0106 13:59:57.810948 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:57 crc kubenswrapper[4869]: I0106 13:59:57.810969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:57 crc kubenswrapper[4869]: I0106 13:59:57.827168 4869 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 06 13:59:57 crc kubenswrapper[4869]: I0106 13:59:57.932270 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 06 13:59:57 crc kubenswrapper[4869]: I0106 13:59:57.953285 4869 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 06 13:59:58 crc kubenswrapper[4869]: I0106 13:59:58.006162 4869 csr.go:261] certificate signing request csr-g5pt6 is approved, waiting to be issued Jan 06 13:59:58 crc kubenswrapper[4869]: I0106 13:59:58.023417 4869 csr.go:257] certificate signing request csr-g5pt6 is issued Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.025296 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-06 13:54:58 +0000 UTC, rotation deadline is 2026-11-27 16:51:39.05982367 +0000 UTC Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.025348 4869 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7802h51m40.03447857s for next certificate rotation Jan 06 13:59:59 crc 
Jan 06 13:59:59 crc kubenswrapper[4869]: E0106 13:59:59.590979 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.593461 4869 trace.go:236] Trace[280002185]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Jan-2026 13:59:45.331) (total time: 14262ms): Jan 06 13:59:59 crc kubenswrapper[4869]: Trace[280002185]: ---"Objects listed" error: 14262ms (13:59:59.593) Jan 06 13:59:59 crc kubenswrapper[4869]: Trace[280002185]: [14.262254144s] [14.262254144s] END Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.593506 4869 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.594615 4869 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 06 13:59:59 crc kubenswrapper[4869]: E0106 13:59:59.602630 4869 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.603432 4869 trace.go:236] Trace[583033067]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Jan-2026 13:59:45.956) (total time: 13647ms): Jan 06 13:59:59 crc kubenswrapper[4869]: Trace[583033067]: ---"Objects listed" error: 13646ms (13:59:59.603) Jan 06 13:59:59 crc kubenswrapper[4869]: Trace[583033067]: [13.64700867s] [13.64700867s] END Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.603485 4869 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.681552 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.681764 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.683210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.683310 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.683327 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.696652 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.716310 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.716611 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.718113 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.718163 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.718180 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.783191 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.814200 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.815720 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cdc6374f0881a3f15f2d74550750599ecf2aeb8da039cd5c5ac546e67dc0edcd" exitCode=255 Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.815854 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.816452 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.817068 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"cdc6374f0881a3f15f2d74550750599ecf2aeb8da039cd5c5ac546e67dc0edcd"} Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.817418 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.817448 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.817460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.817961 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.817978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.817986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.818335 4869 scope.go:117] "RemoveContainer" containerID="cdc6374f0881a3f15f2d74550750599ecf2aeb8da039cd5c5ac546e67dc0edcd" Jan 06 13:59:59 crc kubenswrapper[4869]: I0106 13:59:59.823396 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:00:00 crc kubenswrapper[4869]: I0106 14:00:00.819616 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 06 14:00:00 crc kubenswrapper[4869]: I0106 14:00:00.821296 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1"} Jan 06 14:00:00 crc kubenswrapper[4869]: I0106 14:00:00.821450 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 06 14:00:00 crc kubenswrapper[4869]: I0106 14:00:00.821502 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:00:00 crc kubenswrapper[4869]: I0106 14:00:00.822360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:00 crc kubenswrapper[4869]: I0106 14:00:00.822406 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:00 crc kubenswrapper[4869]: I0106 14:00:00.822417 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:00 crc kubenswrapper[4869]: I0106 14:00:00.975728 4869 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.579293 4869 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 06 14:00:01 crc kubenswrapper[4869]: W0106 14:00:01.579615 4869 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.643749 4869 apiserver.go:52] "Watching apiserver" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.648217 4869 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.648791 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-kt9df","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-operator/iptables-alerter-4ln5h","openshift-dns/node-resolver-tlkdn","openshift-multus/multus-additional-cni-plugins-4b8g7","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-ovn-kubernetes/ovnkube-node-2f9tq","openshift-multus/multus-68bvk"] Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.649372 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.650061 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.650112 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.650220 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 06 14:00:01 crc kubenswrapper[4869]: E0106 14:00:01.650288 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:01 crc kubenswrapper[4869]: E0106 14:00:01.650295 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.650335 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.650314 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.650411 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-tlkdn" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.650419 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:00:01 crc kubenswrapper[4869]: E0106 14:00:01.650506 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.650836 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.651477 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.650949 4869 util.go:30] "No sandbox for pod can be found. 
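Every "Error syncing pod, skipping" entry above reduces to the same condition: the runtime reports NetworkReady=false until a CNI configuration file exists under /etc/kubernetes/cni/net.d/, and on this node that file is written by ovnkube-node once it comes up, so the dependent pods are parked rather than failed. A minimal Go sketch of the condition being polled; the file patterns are an assumption mirroring common CRI-O defaults, not something taken from this log:

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// NetworkReady stays false until a CNI config exists in this directory;
	// the patterns below are assumed, mirroring typical CRI-O defaults.
	const netDir = "/etc/kubernetes/cni/net.d"
	found := false
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, _ := filepath.Glob(filepath.Join(netDir, pattern))
		for _, m := range matches {
			fmt.Println("CNI config present:", m)
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file in", netDir, "- network plugin not ready")
	}
}

Once ovnkube-node-2f9tq starts and drops its config there, the sandbox-less pods listed above should stop being skipped and proceed to sandbox creation.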
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.658286 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.658535 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.658733 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.658909 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.659597 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.660342 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.660355 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.660701 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.660985 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.661137 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.661143 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.661185 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.661190 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.661374 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.661466 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.661567 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.661589 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.661631 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.663155 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"multus-daemon-config" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.663185 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.663509 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.663581 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.664107 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.664400 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.664730 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.665462 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.665946 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.666314 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.666876 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.667626 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.667983 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.676859 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.690057 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.705643 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.709842 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-hostroot\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.709891 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv4sr\" (UniqueName: \"kubernetes.io/projected/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-kube-api-access-xv4sr\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.709915 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74-system-cni-dir\") pod \"multus-additional-cni-plugins-4b8g7\" (UID: \"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\") " pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710004 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-cni-netd\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710087 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/487c527a-7d89-4175-8827-c8cdd6e0211f-ovn-node-metrics-cert\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710220 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-host-run-multus-certs\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710258 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/487c527a-7d89-4175-8827-c8cdd6e0211f-ovnkube-script-lib\") pod 
\"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710312 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710372 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-var-lib-openvswitch\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710403 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-cni-bin\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710420 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-host-run-k8s-cni-cncf-io\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710454 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710477 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710494 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710515 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/89b72572-a31b-48f1-93f4-cbfad03736b1-proxy-tls\") pod \"machine-config-daemon-kt9df\" (UID: \"89b72572-a31b-48f1-93f4-cbfad03736b1\") " pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:00:01 crc kubenswrapper[4869]: E0106 14:00:01.710531 4869 secret.go:188] 
Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710586 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-multus-socket-dir-parent\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710605 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-host-run-netns\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710621 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-multus-daemon-config\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710637 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710654 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4b8g7\" (UID: \"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\") " pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710690 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710707 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-etc-openvswitch\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710726 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-node-log\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710743 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710758 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhcnr\" (UniqueName: \"kubernetes.io/projected/89b72572-a31b-48f1-93f4-cbfad03736b1-kube-api-access-lhcnr\") pod \"machine-config-daemon-kt9df\" (UID: \"89b72572-a31b-48f1-93f4-cbfad03736b1\") " pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710828 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-etc-kubernetes\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710856 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-run-netns\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710873 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-host-var-lib-cni-bin\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710889 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-host-var-lib-kubelet\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710907 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc24f\" (UniqueName: \"kubernetes.io/projected/752ad1ae-d5af-4886-84af-a25fd3dd0eb9-kube-api-access-nc24f\") pod \"node-resolver-tlkdn\" (UID: \"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\") " pod="openshift-dns/node-resolver-tlkdn" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710921 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710936 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-host-var-lib-cni-multus\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: 
I0106 14:00:01.710954 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.710977 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/89b72572-a31b-48f1-93f4-cbfad03736b1-mcd-auth-proxy-config\") pod \"machine-config-daemon-kt9df\" (UID: \"89b72572-a31b-48f1-93f4-cbfad03736b1\") " pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.711002 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.711021 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74-cni-binary-copy\") pod \"multus-additional-cni-plugins-4b8g7\" (UID: \"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\") " pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.711037 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/487c527a-7d89-4175-8827-c8cdd6e0211f-env-overrides\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.711108 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-cni-binary-copy\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.711135 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.711195 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-run-openvswitch\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.711216 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-run-ovn-kubernetes\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.711233 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-systemd-units\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.711271 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/752ad1ae-d5af-4886-84af-a25fd3dd0eb9-hosts-file\") pod \"node-resolver-tlkdn\" (UID: \"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\") " pod="openshift-dns/node-resolver-tlkdn" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.711288 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/487c527a-7d89-4175-8827-c8cdd6e0211f-ovnkube-config\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.711305 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-system-cni-dir\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.711321 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-multus-cni-dir\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.711341 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bksmj\" (UniqueName: \"kubernetes.io/projected/cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74-kube-api-access-bksmj\") pod \"multus-additional-cni-plugins-4b8g7\" (UID: \"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\") " pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.711366 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-run-systemd\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.711380 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-os-release\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: E0106 14:00:01.711808 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:02.211765041 +0000 UTC m=+20.751452725 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.711839 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.712283 4869 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.713379 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-cnibin\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.713427 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-multus-conf-dir\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.713464 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.713490 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74-cnibin\") pod \"multus-additional-cni-plugins-4b8g7\" (UID: \"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\") " pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.713520 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74-os-release\") pod \"multus-additional-cni-plugins-4b8g7\" (UID: \"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\") " pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.713526 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.713554 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4b8g7\" (UID: \"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\") " pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.713731 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-kubelet\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.713766 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-run-ovn\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.713789 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.713811 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/89b72572-a31b-48f1-93f4-cbfad03736b1-rootfs\") pod \"machine-config-daemon-kt9df\" (UID: \"89b72572-a31b-48f1-93f4-cbfad03736b1\") " pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.713835 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.713869 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.713890 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.713913 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-slash\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.713929 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-log-socket\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: E0106 14:00:01.713934 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.713946 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-857xw\" (UniqueName: \"kubernetes.io/projected/487c527a-7d89-4175-8827-c8cdd6e0211f-kube-api-access-857xw\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: E0106 14:00:01.714009 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:02.213985152 +0000 UTC m=+20.753672986 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.726854 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.727156 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: E0106 14:00:01.729050 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 06 14:00:01 crc kubenswrapper[4869]: E0106 14:00:01.729085 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 06 14:00:01 crc kubenswrapper[4869]: E0106 14:00:01.729099 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:01 crc kubenswrapper[4869]: E0106 14:00:01.729168 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:02.229149585 +0000 UTC m=+20.768837249 (durationBeforeRetry 500ms). 
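Annotation: the "Failed to update status for pod" entries in this window all share one root cause, visible at the end of each message: the API server must consult the `pod.network-node-identity.openshift.io` admission webhook, served on this single-node cluster at 127.0.0.1:9743, and nothing is listening there yet, so every status PATCH comes back "connection refused" and the status_manager re-queues it. (The `Error:` continuation of the retry entry just above follows below.) A quick reachability probe, illustrative only and meaningful only when run on the node itself:

```python
# Probe the admission webhook endpoint named in the "connection refused"
# errors above. Pure stdlib; host and port are taken from the log lines.
import socket

def webhook_up(host: str = "127.0.0.1", port: int = 9743, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("webhook listening:", webhook_up())
```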
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.729281 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.736510 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.737559 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.738040 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 06 14:00:01 crc kubenswrapper[4869]: E0106 14:00:01.738086 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 06 14:00:01 crc kubenswrapper[4869]: E0106 14:00:01.738110 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 06 14:00:01 crc kubenswrapper[4869]: E0106 14:00:01.738123 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:01 crc kubenswrapper[4869]: E0106 14:00:01.738199 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:02.23817958 +0000 UTC m=+20.777867244 (durationBeforeRetry 500ms). 
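Annotation: failed mounts are rescheduled, not abandoned; the deadline quoted above (14:00:02.238...) is the failure time plus the stated durationBeforeRetry of 500ms, and its `Error:` detail continues below. A sketch of the cadence this implies: the 500ms initial delay is read off the log, while the doubling factor and the ~2m2s ceiling are kubelet backoff defaults quoted from memory and should be treated as assumptions:

```python
# Approximate retry schedule for a kubelet volume operation that keeps
# failing. The initial 500ms delay comes from the log; the doubling and
# the cap are assumed defaults, not values observed in this excerpt.
from datetime import datetime, timedelta

t = datetime(2026, 1, 6, 14, 0, 1, 738199)       # first failure, from the log
delay = timedelta(milliseconds=500)
cap = timedelta(minutes=2, seconds=2)            # assumed ceiling
for attempt in range(1, 8):
    t += delay
    print(f"attempt {attempt}: no retry before {t.time()} (waited {delay})")
    delay = min(delay * 2, cap)                  # assumed exponential growth
```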
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.739066 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.749589 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.752407 4869 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.763466 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.774176 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.784373 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.793117 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.809785 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815076 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815132 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815160 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815195 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815222 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815252 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815283 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815307 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815332 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815361 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815395 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815422 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815453 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815483 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815513 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815542 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815651 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815712 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815701 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815740 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815763 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.815708 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.816059 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.816084 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.816135 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.816179 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.816224 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.816257 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.816339 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.816375 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.816413 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.816446 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.816481 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.816530 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod 
\"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.816569 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.816598 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.816629 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.816682 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.817094 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.817128 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.817159 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.817188 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.817212 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.817246 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.817275 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.817298 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.817329 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.817360 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.817384 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.817416 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.817448 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.817483 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.817523 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.817552 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.816894 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.817082 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.817467 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.817573 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.818465 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.818689 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.818822 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.819102 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.819139 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.819170 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.819511 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.819977 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.820229 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.820418 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.820647 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.820650 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.820707 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.820885 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.820964 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.821032 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.821196 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.821243 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.821308 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.821320 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.821355 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.821418 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.821585 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.821634 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.821682 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.822938 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.822997 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823052 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823114 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823200 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823221 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823246 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823270 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823297 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823321 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823341 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823362 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823386 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823406 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823427 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823449 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823478 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823858 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823884 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823909 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.824178 4869 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.825458 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.825447 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.825579 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823261 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823386 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.822887 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823758 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.823810 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.824031 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.824023 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.826828 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.824179 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). 
InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.824265 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.824410 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.824611 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.824968 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.825078 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.825103 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.825343 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.825386 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: E0106 14:00:01.825647 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:00:02.32560455 +0000 UTC m=+20.865292214 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.825659 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.826743 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.826926 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827026 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827059 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827069 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827166 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827196 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827227 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.826894 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827261 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827273 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827287 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827313 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827327 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827347 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827390 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827418 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827595 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827625 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827652 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827692 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827717 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827742 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827768 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827795 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827822 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827847 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827872 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827898 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827925 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827949 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827975 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828000 4869 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828026 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828052 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828076 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828108 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828138 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828162 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828187 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828244 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828273 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828298 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828326 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828355 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828383 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828409 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828431 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828456 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828482 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828503 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828555 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828587 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828609 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828632 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828655 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828702 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828724 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828744 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828768 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828789 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828809 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828832 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828855 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828878 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828898 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828920 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828941 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828962 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828983 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829006 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 
14:00:01.829027 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829048 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829109 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829129 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829146 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829163 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829180 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829198 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829220 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829239 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 06 
14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829260 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829278 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829295 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829313 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829331 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829351 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829369 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829385 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829404 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829430 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 
14:00:01.829449 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829468 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829485 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829506 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829523 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829542 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829562 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829581 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829597 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829614 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 
14:00:01.829631 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829651 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829682 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829703 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829723 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829743 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829761 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829796 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829817 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829835 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 06 14:00:01 crc 
kubenswrapper[4869]: I0106 14:00:01.829853 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829871 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829889 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829907 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829924 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829944 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829968 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829986 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830003 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830022 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830043 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830060 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830078 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830098 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830116 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830133 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830163 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830179 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830198 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830217 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: 
\"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830307 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhcnr\" (UniqueName: \"kubernetes.io/projected/89b72572-a31b-48f1-93f4-cbfad03736b1-kube-api-access-lhcnr\") pod \"machine-config-daemon-kt9df\" (UID: \"89b72572-a31b-48f1-93f4-cbfad03736b1\") " pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830334 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-etc-kubernetes\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830367 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4b8g7\" (UID: \"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\") " pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830399 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-etc-openvswitch\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830416 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-node-log\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830437 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-host-var-lib-kubelet\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830455 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc24f\" (UniqueName: \"kubernetes.io/projected/752ad1ae-d5af-4886-84af-a25fd3dd0eb9-kube-api-access-nc24f\") pod \"node-resolver-tlkdn\" (UID: \"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\") " pod="openshift-dns/node-resolver-tlkdn" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830477 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-run-netns\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830494 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-host-var-lib-cni-bin\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc 
kubenswrapper[4869]: I0106 14:00:01.830514 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830533 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-host-var-lib-cni-multus\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830552 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/89b72572-a31b-48f1-93f4-cbfad03736b1-mcd-auth-proxy-config\") pod \"machine-config-daemon-kt9df\" (UID: \"89b72572-a31b-48f1-93f4-cbfad03736b1\") " pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830594 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74-cni-binary-copy\") pod \"multus-additional-cni-plugins-4b8g7\" (UID: \"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\") " pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830647 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-run-openvswitch\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830681 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-run-ovn-kubernetes\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830700 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/487c527a-7d89-4175-8827-c8cdd6e0211f-env-overrides\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830719 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-cni-binary-copy\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830742 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-systemd-units\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 
14:00:01.830759 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/752ad1ae-d5af-4886-84af-a25fd3dd0eb9-hosts-file\") pod \"node-resolver-tlkdn\" (UID: \"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\") " pod="openshift-dns/node-resolver-tlkdn" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830777 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-multus-cni-dir\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830797 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bksmj\" (UniqueName: \"kubernetes.io/projected/cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74-kube-api-access-bksmj\") pod \"multus-additional-cni-plugins-4b8g7\" (UID: \"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\") " pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830816 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/487c527a-7d89-4175-8827-c8cdd6e0211f-ovnkube-config\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830833 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-system-cni-dir\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830850 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-os-release\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830869 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-run-systemd\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830888 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74-cnibin\") pod \"multus-additional-cni-plugins-4b8g7\" (UID: \"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\") " pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830910 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74-os-release\") pod \"multus-additional-cni-plugins-4b8g7\" (UID: \"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\") " pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830933 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" 
(UniqueName: \"kubernetes.io/configmap/cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4b8g7\" (UID: \"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\") " pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830951 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-cnibin\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830972 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-multus-conf-dir\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831003 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831024 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/89b72572-a31b-48f1-93f4-cbfad03736b1-rootfs\") pod \"machine-config-daemon-kt9df\" (UID: \"89b72572-a31b-48f1-93f4-cbfad03736b1\") " pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831054 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-kubelet\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831073 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-run-ovn\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831093 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-857xw\" (UniqueName: \"kubernetes.io/projected/487c527a-7d89-4175-8827-c8cdd6e0211f-kube-api-access-857xw\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831113 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831131 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-slash\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831149 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-log-socket\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831167 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74-system-cni-dir\") pod \"multus-additional-cni-plugins-4b8g7\" (UID: \"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\") " pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831185 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-hostroot\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831204 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv4sr\" (UniqueName: \"kubernetes.io/projected/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-kube-api-access-xv4sr\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831224 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-cni-netd\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831240 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/487c527a-7d89-4175-8827-c8cdd6e0211f-ovn-node-metrics-cert\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831259 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-host-run-multus-certs\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831277 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/487c527a-7d89-4175-8827-c8cdd6e0211f-ovnkube-script-lib\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831308 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-host-run-k8s-cni-cncf-io\") pod 
\"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831340 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-var-lib-openvswitch\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831360 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-cni-bin\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831379 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-multus-socket-dir-parent\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831398 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-host-run-netns\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831416 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-multus-daemon-config\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831460 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/89b72572-a31b-48f1-93f4-cbfad03736b1-proxy-tls\") pod \"machine-config-daemon-kt9df\" (UID: \"89b72572-a31b-48f1-93f4-cbfad03736b1\") " pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831555 4869 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831583 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831594 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831607 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831617 4869 reconciler_common.go:293] "Volume detached for volume 
\"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831627 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831637 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831650 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831688 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831702 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831712 4869 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831723 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831734 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831744 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831754 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831764 4869 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831775 4869 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831789 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831800 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831811 4869 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831822 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831835 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831846 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831856 4869 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831868 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831879 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831889 4869 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831900 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831912 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831924 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831934 4869 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831945 4869 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831956 4869 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831966 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831976 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831988 4869 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831999 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832009 4869 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832019 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832029 4869 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832039 4869 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832049 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832060 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832070 4869 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832080 4869 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832094 4869 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832106 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832117 4869 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832128 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832138 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832149 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832159 4869 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832171 4869 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832183 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832193 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832204 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832214 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832223 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832234 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.836681 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/89b72572-a31b-48f1-93f4-cbfad03736b1-proxy-tls\") pod \"machine-config-daemon-kt9df\" (UID: \"89b72572-a31b-48f1-93f4-cbfad03736b1\") " pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827622 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.827880 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828165 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.841646 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828245 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828264 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828451 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828566 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828823 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.828884 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829173 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829415 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829370 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829504 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829643 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.841975 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.829928 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830088 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830565 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830658 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.830958 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831008 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831304 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831588 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.831819 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832090 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.832715 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.833229 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.833967 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.835913 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.837104 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.837998 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.838233 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.838748 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.840331 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.840761 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.840905 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.841074 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.841187 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.841367 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.842052 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.842186 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.842373 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-multus-cni-dir\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.842442 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.842583 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.842565 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.842693 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.843108 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-etc-openvswitch\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.843248 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.844757 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/487c527a-7d89-4175-8827-c8cdd6e0211f-ovnkube-config\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.844836 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74-system-cni-dir\") pod \"multus-additional-cni-plugins-4b8g7\" (UID: \"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\") " pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.844868 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-hostroot\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.844904 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-log-socket\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.845040 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-system-cni-dir\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.845047 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-cni-netd\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.843277 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). 
InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.843477 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.845128 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.845187 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.845457 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-node-log\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.845512 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.845533 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.845790 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-os-release\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.845879 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-run-systemd\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.845946 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-host-var-lib-kubelet\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.845960 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.846143 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.846346 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.845917 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74-cnibin\") pod \"multus-additional-cni-plugins-4b8g7\" (UID: \"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\") " pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.846563 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.846619 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.846755 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-host-var-lib-cni-bin\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.846845 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74-os-release\") pod \"multus-additional-cni-plugins-4b8g7\" (UID: \"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\") " pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.846912 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.846909 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.847006 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-kubelet\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.847104 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.847163 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-cnibin\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.847196 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.847282 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-host-var-lib-cni-multus\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.847293 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-multus-conf-dir\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.847405 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.847601 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.847643 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/89b72572-a31b-48f1-93f4-cbfad03736b1-rootfs\") pod \"machine-config-daemon-kt9df\" (UID: \"89b72572-a31b-48f1-93f4-cbfad03736b1\") " pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.850299 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.851079 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/89b72572-a31b-48f1-93f4-cbfad03736b1-mcd-auth-proxy-config\") pod \"machine-config-daemon-kt9df\" (UID: \"89b72572-a31b-48f1-93f4-cbfad03736b1\") " pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.851095 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.851364 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.851792 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.851892 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.851924 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.852110 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.852133 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-cni-bin\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.852198 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-host-run-multus-certs\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.852185 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.852754 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.853131 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.853233 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.853483 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.853926 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.854472 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-run-openvswitch\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.854581 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-run-ovn-kubernetes\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.857185 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-multus-socket-dir-parent\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.857261 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-host-run-netns\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.857371 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-run-ovn\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.857806 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.857809 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod 
"31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.857853 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-slash\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.846721 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-run-netns\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.857820 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.858203 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-multus-daemon-config\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.858300 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-host-run-k8s-cni-cncf-io\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.858390 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.858715 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.858770 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-systemd-units\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.858845 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-var-lib-openvswitch\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.858979 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/752ad1ae-d5af-4886-84af-a25fd3dd0eb9-hosts-file\") pod \"node-resolver-tlkdn\" (UID: \"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\") " pod="openshift-dns/node-resolver-tlkdn" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.859272 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.859369 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-cni-binary-copy\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.859530 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.860448 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.860888 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.860932 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.861495 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.861540 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.862298 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.862900 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.863011 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.863013 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.862289 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\
"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.863147 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.863327 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.863389 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.863328 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-etc-kubernetes\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.864002 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.864032 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.865687 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.866414 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.866412 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.866856 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.866961 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.867066 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.866967 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.867539 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.867888 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.868381 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.868456 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.869243 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.872740 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1" exitCode=255 Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.872790 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1"} Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.872864 4869 scope.go:117] "RemoveContainer" containerID="cdc6374f0881a3f15f2d74550750599ecf2aeb8da039cd5c5ac546e67dc0edcd" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.874524 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.875773 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.875940 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.876138 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.880386 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.880471 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.881047 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4b8g7\" (UID: \"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\") " pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.882117 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.882130 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.882835 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.883371 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.884248 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4b8g7\" (UID: \"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\") " pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.884973 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.886042 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.888340 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.891489 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.893007 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.893952 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhcnr\" (UniqueName: \"kubernetes.io/projected/89b72572-a31b-48f1-93f4-cbfad03736b1-kube-api-access-lhcnr\") pod \"machine-config-daemon-kt9df\" (UID: \"89b72572-a31b-48f1-93f4-cbfad03736b1\") " pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.910635 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.914466 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.914763 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc24f\" (UniqueName: \"kubernetes.io/projected/752ad1ae-d5af-4886-84af-a25fd3dd0eb9-kube-api-access-nc24f\") pod \"node-resolver-tlkdn\" (UID: \"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\") " pod="openshift-dns/node-resolver-tlkdn" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.914794 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.915123 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.915442 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv4sr\" (UniqueName: \"kubernetes.io/projected/e40cdd2b-5d24-4ef5-995a-4e09fc90d33c-kube-api-access-xv4sr\") pod \"multus-68bvk\" (UID: \"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\") " pod="openshift-multus/multus-68bvk" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.915483 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.915496 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.915735 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.915877 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/487c527a-7d89-4175-8827-c8cdd6e0211f-env-overrides\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.915900 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.915933 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.916040 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.916260 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74-cni-binary-copy\") pod \"multus-additional-cni-plugins-4b8g7\" (UID: \"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\") " pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.918982 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.919608 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.920310 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bksmj\" (UniqueName: \"kubernetes.io/projected/cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74-kube-api-access-bksmj\") pod \"multus-additional-cni-plugins-4b8g7\" (UID: \"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\") " pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.921033 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-857xw\" (UniqueName: \"kubernetes.io/projected/487c527a-7d89-4175-8827-c8cdd6e0211f-kube-api-access-857xw\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.921312 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.921606 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/487c527a-7d89-4175-8827-c8cdd6e0211f-ovn-node-metrics-cert\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.921922 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.921439 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.922371 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/487c527a-7d89-4175-8827-c8cdd6e0211f-ovnkube-script-lib\") pod \"ovnkube-node-2f9tq\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.922649 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"moun
tPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.927132 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.927688 4869 scope.go:117] "RemoveContainer" containerID="e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1" Jan 06 14:00:01 crc kubenswrapper[4869]: E0106 14:00:01.927864 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933458 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933482 4869 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933492 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933502 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933511 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933519 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933528 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933537 4869 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933545 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933555 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933564 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933573 4869 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933585 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933596 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933609 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933621 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933635 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933648 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933659 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933690 4869 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933702 4869 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933713 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933726 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933736 4869 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933747 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933759 4869 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933770 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933781 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933791 4869 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933802 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933812 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933822 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933834 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933845 4869 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933855 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933866 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933875 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933886 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933896 4869 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933907 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933918 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933929 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933941 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933952 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933963 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933973 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933984 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.933997 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934011 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934021 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934033 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934044 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934055 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934066 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934077 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934087 4869 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934098 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934111 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934123 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934136 4869 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934148 4869 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934159 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934170 4869 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934182 4869 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934190 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934200 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934209 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934219 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934228 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934236 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934245 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934253 4869 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934262 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934270 4869 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934278 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934287 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934299 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934308 4869 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934316 4869 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934325 4869 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934333 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934342 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934352 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934360 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934370 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934379 4869 reconciler_common.go:293] "Volume detached for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934388 4869 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934397 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934405 4869 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934415 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934424 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934434 4869 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934445 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934454 4869 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934462 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934471 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934480 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934488 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934497 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934505 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934513 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934522 4869 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934530 4869 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934538 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934547 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934556 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934564 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934572 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934581 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934589 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934598 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934606 4869 
reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934614 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934623 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934631 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934640 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934648 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934657 4869 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934695 4869 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934705 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934715 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934724 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934732 4869 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934742 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on 
node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934751 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934760 4869 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934768 4869 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934776 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934785 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934795 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.934819 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.936562 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.947957 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.959386 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.964332 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.973320 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.981405 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.989458 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-tlkdn" Jan 06 14:00:01 crc kubenswrapper[4869]: I0106 14:00:01.991953 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdc6374f0881a3f15f2d74550750599ecf2aeb8da039cd5c5ac546e67dc0edcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T13:59:59Z\\\",\\\"message\\\":\\\":]:17697\\\\nI0106 13:59:59.707309 1 genericapiserver.go:683] [graceful-termination] waiting for 
shutdown to be initiated\\\\nI0106 13:59:59.707356 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0106 13:59:59.707390 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0106 13:59:59.707438 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3497218274/tls.crt::/tmp/serving-cert-3497218274/tls.key\\\\\\\"\\\\nI0106 13:59:59.707544 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0106 13:59:59.707951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0106 13:59:59.707985 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0106 13:59:59.708331 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0106 13:59:59.708351 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0106 13:59:59.708373 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0106 13:59:59.708386 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0106 13:59:59.711738 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0106 13:59:59.712236 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0106 13:59:59.712724 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0106 13:59:59.734712 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for 
RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.003822 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:00:02 crc kubenswrapper[4869]: W0106 14:00:02.006335 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-a0a45edd01dfeb6f2d81238d7ba37a95c8522204964024cd94729d2c67622575 WatchSource:0}: Error finding container a0a45edd01dfeb6f2d81238d7ba37a95c8522204964024cd94729d2c67622575: Status 404 returned error can't find the container with id a0a45edd01dfeb6f2d81238d7ba37a95c8522204964024cd94729d2c67622575 Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.007167 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.022920 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.026995 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.030947 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.034260 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.038176 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.038926 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 06 14:00:02 crc kubenswrapper[4869]: W0106 14:00:02.039192 4869 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.039251 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.043804 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.043841 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.049910 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.052650 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.064002 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.064321 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-68bvk" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.065044 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.068478 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" Jan 06 14:00:02 crc kubenswrapper[4869]: W0106 14:00:02.068904 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89b72572_a31b_48f1_93f4_cbfad03736b1.slice/crio-07a3c91430322644e7eae708289d2f3e12fd9e46289221597d924c06ec8685af WatchSource:0}: Error finding container 07a3c91430322644e7eae708289d2f3e12fd9e46289221597d924c06ec8685af: Status 404 returned error can't find the container with id 07a3c91430322644e7eae708289d2f3e12fd9e46289221597d924c06ec8685af Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.072098 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.082815 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.096359 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.110055 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.122305 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.137013 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.144495 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.144534 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.148310 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 06 14:00:02 crc kubenswrapper[4869]: W0106 14:00:02.177063 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod487c527a_7d89_4175_8827_c8cdd6e0211f.slice/crio-1b6ec9d9e6372d1dd9a0588bf75844df07980546f4a55993ea1440b7d39cd0cd WatchSource:0}: Error finding container 1b6ec9d9e6372d1dd9a0588bf75844df07980546f4a55993ea1440b7d39cd0cd: Status 404 returned error can't find the container with id 1b6ec9d9e6372d1dd9a0588bf75844df07980546f4a55993ea1440b7d39cd0cd Jan 06 14:00:02 crc kubenswrapper[4869]: W0106 14:00:02.185601 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcca4d7e4_e530_4ffc_a1a3_5f5b7c758d74.slice/crio-6368e5f56ece19f5cd319900e5832c93166f2b1dc9c2bc6efc0d65368d341dfb WatchSource:0}: Error finding container 6368e5f56ece19f5cd319900e5832c93166f2b1dc9c2bc6efc0d65368d341dfb: Status 404 returned error can't find the container with id 6368e5f56ece19f5cd319900e5832c93166f2b1dc9c2bc6efc0d65368d341dfb Jan 06 14:00:02 crc kubenswrapper[4869]: W0106 14:00:02.190173 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode40cdd2b_5d24_4ef5_995a_4e09fc90d33c.slice/crio-76dda9c7d680070dd0485acc85c46e8d97688930a36fb522c109f3cd38303e71 WatchSource:0}: Error finding container 76dda9c7d680070dd0485acc85c46e8d97688930a36fb522c109f3cd38303e71: Status 404 returned error can't find the container with id 76dda9c7d680070dd0485acc85c46e8d97688930a36fb522c109f3cd38303e71 Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.244984 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:02 crc 
kubenswrapper[4869]: I0106 14:00:02.245033 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:00:02 crc kubenswrapper[4869]: E0106 14:00:02.245177 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.245191 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:00:02 crc kubenswrapper[4869]: E0106 14:00:02.245203 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 06 14:00:02 crc kubenswrapper[4869]: E0106 14:00:02.245218 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.245247 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:00:02 crc kubenswrapper[4869]: E0106 14:00:02.245331 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 06 14:00:02 crc kubenswrapper[4869]: E0106 14:00:02.245360 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 06 14:00:02 crc kubenswrapper[4869]: E0106 14:00:02.245374 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:03.245247038 +0000 UTC m=+21.784934702 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 06 14:00:02 crc kubenswrapper[4869]: E0106 14:00:02.245411 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:03.245403531 +0000 UTC m=+21.785091195 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 06 14:00:02 crc kubenswrapper[4869]: E0106 14:00:02.245449 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:03.245416561 +0000 UTC m=+21.785104225 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 06 14:00:02 crc kubenswrapper[4869]: E0106 14:00:02.245494 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 06 14:00:02 crc kubenswrapper[4869]: E0106 14:00:02.245544 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 06 14:00:02 crc kubenswrapper[4869]: E0106 14:00:02.245568 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 06 14:00:02 crc kubenswrapper[4869]: E0106 14:00:02.245655 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:03.245629016 +0000 UTC m=+21.785316860 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.345686 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 06 14:00:02 crc kubenswrapper[4869]: E0106 14:00:02.345921 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:00:03.345903278 +0000 UTC m=+21.885590942 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.803027 4869 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.805379 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.805429 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.805440 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.805560 4869 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.817680 4869 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.818222 4869 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.819551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.819591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.819602 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.819620 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.819631 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:02Z","lastTransitionTime":"2026-01-06T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:00:02 crc kubenswrapper[4869]: E0106 14:00:02.836146 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:02Z is after 2025-08-24T17:21:41Z"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.842203 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.842261 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.842274 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.842297 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.842308 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:02Z","lastTransitionTime":"2026-01-06T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:00:02 crc kubenswrapper[4869]: E0106 14:00:02.859021 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:02Z is after 2025-08-24T17:21:41Z"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.863655 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.863713 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.863724 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.863743 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.863755 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:02Z","lastTransitionTime":"2026-01-06T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.878685 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerStarted","Data":"34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8"}
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.878744 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerStarted","Data":"d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0"}
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.878759 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerStarted","Data":"07a3c91430322644e7eae708289d2f3e12fd9e46289221597d924c06ec8685af"}
Jan 06 14:00:02 crc kubenswrapper[4869]: E0106 14:00:02.880382 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:02Z is after 2025-08-24T17:21:41Z"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.880843 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22"}
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.880916 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82"}
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.880937 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a0a45edd01dfeb6f2d81238d7ba37a95c8522204964024cd94729d2c67622575"}
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.882623 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"48450addf4c3d56da286bc176690c057208dde83ee951efbf2b9a600ea4dd80a"}
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.885164 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.885209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.885221 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.885242 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.885254 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:02Z","lastTransitionTime":"2026-01-06T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.886482 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.889280 4869 scope.go:117] "RemoveContainer" containerID="e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1"
Jan 06 14:00:02 crc kubenswrapper[4869]: E0106 14:00:02.889480 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.890035 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" event={"ID":"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74","Type":"ContainerStarted","Data":"0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325"}
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.890079 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" event={"ID":"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74","Type":"ContainerStarted","Data":"6368e5f56ece19f5cd319900e5832c93166f2b1dc9c2bc6efc0d65368d341dfb"}
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.892590 4869 generic.go:334] "Generic (PLEG): container finished" podID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerID="4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e" exitCode=0
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.892679 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerDied","Data":"4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e"}
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.892702 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerStarted","Data":"1b6ec9d9e6372d1dd9a0588bf75844df07980546f4a55993ea1440b7d39cd0cd"}
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.895308 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tlkdn" event={"ID":"752ad1ae-d5af-4886-84af-a25fd3dd0eb9","Type":"ContainerStarted","Data":"6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc"}
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.895332 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tlkdn" event={"ID":"752ad1ae-d5af-4886-84af-a25fd3dd0eb9","Type":"ContainerStarted","Data":"bf469559740d3cb9ceb618a72d59c6486eab63a9eecf65ab409de85c83914096"}
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.897974 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6"}
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.898016 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5e13dc25b188675aa03d3c8cd7ee97201376d6285d596d07077ca493cfa977ea"}
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.899147 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:02Z is after 2025-08-24T17:21:41Z"
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.900227 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-68bvk" event={"ID":"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c","Type":"ContainerStarted","Data":"7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724"}
Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.900273 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-68bvk" event={"ID":"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c","Type":"ContainerStarted","Data":"76dda9c7d680070dd0485acc85c46e8d97688930a36fb522c109f3cd38303e71"}
Jan 06 14:00:02 crc kubenswrapper[4869]: E0106 14:00:02.907891 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.915208 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.915272 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.915287 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.915314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.915337 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:02Z","lastTransitionTime":"2026-01-06T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.922304 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.944852 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:02 crc kubenswrapper[4869]: E0106 14:00:02.944879 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:02 crc kubenswrapper[4869]: E0106 14:00:02.945001 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.950245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.950277 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.950285 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.950301 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.950311 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:02Z","lastTransitionTime":"2026-01-06T14:00:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.959580 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.973344 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://cdc6374f0881a3f15f2d74550750599ecf2aeb8da039cd5c5ac546e67dc0edcd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T13:59:59Z\\\",\\\"message\\\":\\\":]:17697\\\\nI0106 13:59:59.707309 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0106 13:59:59.707356 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0106 13:59:59.707390 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0106 13:59:59.707438 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3497218274/tls.crt::/tmp/serving-cert-3497218274/tls.key\\\\\\\"\\\\nI0106 13:59:59.707544 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI0106 13:59:59.707951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0106 13:59:59.707985 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0106 13:59:59.708331 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0106 13:59:59.708351 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0106 13:59:59.708373 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0106 13:59:59.708386 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0106 13:59:59.711738 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0106 13:59:59.712236 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI0106 13:59:59.712724 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF0106 13:59:59.734712 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a 
new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.985999 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:02 crc kubenswrapper[4869]: I0106 14:00:02.999531 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.012256 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:03Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.026975 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:03Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.037896 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:03Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.053702 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.053758 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.053771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.053793 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.053807 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:03Z","lastTransitionTime":"2026-01-06T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.062381 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\
\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:03Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.078236 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:03Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.092253 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:03Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.106798 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:03Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.120827 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:03Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.134861 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:03Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.147859 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 
14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:03Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.156578 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.156617 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.156624 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.156637 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.156647 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:03Z","lastTransitionTime":"2026-01-06T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.161112 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:03Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.175034 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with 
incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:03Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.188080 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:03Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.201023 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:03Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.215064 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:03Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.259513 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.259551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.259559 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.259574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.259584 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:03Z","lastTransitionTime":"2026-01-06T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.259951 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.260005 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.260027 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.260054 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:03 crc kubenswrapper[4869]: E0106 14:00:03.260135 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 06 14:00:03 crc kubenswrapper[4869]: E0106 14:00:03.260178 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:05.260163051 +0000 UTC m=+23.799850715 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 06 14:00:03 crc kubenswrapper[4869]: E0106 14:00:03.260726 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 06 14:00:03 crc kubenswrapper[4869]: E0106 14:00:03.260760 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:05.260752654 +0000 UTC m=+23.800440318 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 06 14:00:03 crc kubenswrapper[4869]: E0106 14:00:03.260814 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 06 14:00:03 crc kubenswrapper[4869]: E0106 14:00:03.260827 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 06 14:00:03 crc kubenswrapper[4869]: E0106 14:00:03.260838 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:03 crc kubenswrapper[4869]: E0106 14:00:03.260858 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:05.260852726 +0000 UTC m=+23.800540390 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:03 crc kubenswrapper[4869]: E0106 14:00:03.261057 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 06 14:00:03 crc kubenswrapper[4869]: E0106 14:00:03.261106 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 06 14:00:03 crc kubenswrapper[4869]: E0106 14:00:03.261129 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:03 crc kubenswrapper[4869]: E0106 14:00:03.261223 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:05.261190054 +0000 UTC m=+23.800877898 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.273773 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:03Z 
is after 2025-08-24T17:21:41Z" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.326887 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:03Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.360707 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:00:03 crc kubenswrapper[4869]: E0106 14:00:03.360810 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:00:05.36077993 +0000 UTC m=+23.900467594 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.362479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.362527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.362541 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.362559 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.362571 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:03Z","lastTransitionTime":"2026-01-06T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.470584 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.470635 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.470646 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.470661 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.470688 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:03Z","lastTransitionTime":"2026-01-06T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.574642 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.574716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.574727 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.574747 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.574760 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:03Z","lastTransitionTime":"2026-01-06T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.683316 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.683366 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.683377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.683398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.683411 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:03Z","lastTransitionTime":"2026-01-06T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.704179 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:03 crc kubenswrapper[4869]: E0106 14:00:03.704814 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.705273 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.705335 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:03 crc kubenswrapper[4869]: E0106 14:00:03.705503 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:03 crc kubenswrapper[4869]: E0106 14:00:03.705348 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.715723 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.716552 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.717759 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.718459 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.719619 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.720302 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.721001 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.722130 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.722805 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.723759 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.724337 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.725485 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.726058 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.726587 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.727566 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.728130 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.729079 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.729500 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.730069 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.731167 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.731642 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.732681 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.733153 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.734233 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.734841 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.735557 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.736785 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.737288 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.738392 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.738964 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.739943 4869 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.740082 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.741854 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.743037 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.743527 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.745083 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.745760 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.746722 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.747385 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.748481 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.749033 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.750044 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.751355 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.752122 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.753043 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.753790 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.754766 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.755622 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.756773 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.757284 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.757812 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.758769 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.759327 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.760242 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.788329 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.788368 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.788377 4869 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.788393 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.788403 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:03Z","lastTransitionTime":"2026-01-06T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.898489 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.898528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.898536 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.898552 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.898564 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:03Z","lastTransitionTime":"2026-01-06T14:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.904630 4869 generic.go:334] "Generic (PLEG): container finished" podID="cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74" containerID="0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325" exitCode=0 Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.904723 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" event={"ID":"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74","Type":"ContainerDied","Data":"0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325"} Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.915436 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerStarted","Data":"5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d"} Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.915502 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerStarted","Data":"6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8"} Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.915512 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerStarted","Data":"4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a"} Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.922688 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 
14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:03Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.939046 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:03Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:03 crc kubenswrapper[4869]: I0106 14:00:03.956992 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:03Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:03 crc 
kubenswrapper[4869]: I0106 14:00:03.973676 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:03Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.002647 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.002708 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.002721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.002741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.002755 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:04Z","lastTransitionTime":"2026-01-06T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.003200 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:04Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.022037 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:04Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.042124 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:04Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.055649 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k
8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:04Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.068506 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:04Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.085265 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:04Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.103196 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:04Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.109356 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.109520 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.109618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.109736 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.109846 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:04Z","lastTransitionTime":"2026-01-06T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.130949 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:04Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.213160 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.213192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.213201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.213215 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.213226 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:04Z","lastTransitionTime":"2026-01-06T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.316034 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.316088 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.316102 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.316124 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.316144 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:04Z","lastTransitionTime":"2026-01-06T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.419607 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.420089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.420100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.420120 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.420135 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:04Z","lastTransitionTime":"2026-01-06T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.523204 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.523237 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.523245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.523262 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.523271 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:04Z","lastTransitionTime":"2026-01-06T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.626686 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.626720 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.626731 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.626749 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.626762 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:04Z","lastTransitionTime":"2026-01-06T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.729850 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.729896 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.729906 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.729924 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.729935 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:04Z","lastTransitionTime":"2026-01-06T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.832838 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.832896 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.832905 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.832921 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.832933 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:04Z","lastTransitionTime":"2026-01-06T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.925038 4869 generic.go:334] "Generic (PLEG): container finished" podID="cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74" containerID="d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85" exitCode=0 Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.925119 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" event={"ID":"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74","Type":"ContainerDied","Data":"d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85"} Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.931882 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerStarted","Data":"1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41"} Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.931960 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerStarted","Data":"2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f"} Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.931984 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerStarted","Data":"ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb"} Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.935519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.935582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.935604 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.935634 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.935654 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:04Z","lastTransitionTime":"2026-01-06T14:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.947003 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:04Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.973306 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:04Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:04 crc kubenswrapper[4869]: I0106 14:00:04.990363 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k
8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:04Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.008939 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:05Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.025586 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:05Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.039732 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.039774 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.039782 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.039801 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.039813 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:05Z","lastTransitionTime":"2026-01-06T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.041935 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:05Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.058654 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:05Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.075533 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 
14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:05Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.090613 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:05Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.107452 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:05Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.122208 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:05Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.135325 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:05Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.142609 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.142658 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.142711 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.142735 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.142750 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:05Z","lastTransitionTime":"2026-01-06T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.248940 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.249509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.249583 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.249657 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.249742 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:05Z","lastTransitionTime":"2026-01-06T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.288199 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.288261 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.288297 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.288328 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:05 crc kubenswrapper[4869]: E0106 14:00:05.288538 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 06 14:00:05 crc kubenswrapper[4869]: E0106 14:00:05.288562 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 06 14:00:05 crc kubenswrapper[4869]: E0106 14:00:05.288578 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:05 crc kubenswrapper[4869]: E0106 14:00:05.288604 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 06 14:00:05 crc kubenswrapper[4869]: E0106 14:00:05.288646 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:09.288624116 +0000 UTC m=+27.828311800 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:05 crc kubenswrapper[4869]: E0106 14:00:05.288703 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:09.288677997 +0000 UTC m=+27.828365661 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 06 14:00:05 crc kubenswrapper[4869]: E0106 14:00:05.289018 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 06 14:00:05 crc kubenswrapper[4869]: E0106 14:00:05.289092 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 06 14:00:05 crc kubenswrapper[4869]: E0106 14:00:05.289119 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:05 crc kubenswrapper[4869]: E0106 14:00:05.289250 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:09.289217879 +0000 UTC m=+27.828905593 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:05 crc kubenswrapper[4869]: E0106 14:00:05.289732 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 06 14:00:05 crc kubenswrapper[4869]: E0106 14:00:05.289788 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:09.289777121 +0000 UTC m=+27.829464775 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.352320 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.352354 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.352363 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.352378 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.352387 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:05Z","lastTransitionTime":"2026-01-06T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.389191 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:00:05 crc kubenswrapper[4869]: E0106 14:00:05.389897 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:00:09.389863859 +0000 UTC m=+27.929551523 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.455644 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.455717 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.455731 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.455754 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.455769 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:05Z","lastTransitionTime":"2026-01-06T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.558432 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.558531 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.558569 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.558605 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.558631 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:05Z","lastTransitionTime":"2026-01-06T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.661912 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.661965 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.661976 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.661995 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.662010 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:05Z","lastTransitionTime":"2026-01-06T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.703616 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.703761 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.703650 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:05 crc kubenswrapper[4869]: E0106 14:00:05.703838 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:05 crc kubenswrapper[4869]: E0106 14:00:05.703998 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:05 crc kubenswrapper[4869]: E0106 14:00:05.704142 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.766004 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.766078 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.766102 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.766132 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.766156 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:05Z","lastTransitionTime":"2026-01-06T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.869806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.869876 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.869889 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.869909 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.869922 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:05Z","lastTransitionTime":"2026-01-06T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.937782 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81"} Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.941164 4869 generic.go:334] "Generic (PLEG): container finished" podID="cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74" containerID="859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db" exitCode=0 Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.941209 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" event={"ID":"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74","Type":"ContainerDied","Data":"859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db"} Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.953993 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:05Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.974073 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.974125 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.974137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.974157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.974177 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:05Z","lastTransitionTime":"2026-01-06T14:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.979519 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:05Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:05 crc kubenswrapper[4869]: I0106 14:00:05.994562 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:05Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.030140 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.082055 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.082105 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.082115 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.082131 4869 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.082159 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:06Z","lastTransitionTime":"2026-01-06T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.094804 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.115180 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.130379 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.145432 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.160375 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.170873 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.184457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.184497 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.184508 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.184524 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.184535 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:06Z","lastTransitionTime":"2026-01-06T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.193392 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z 
is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.213416 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.232980 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.251559 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.267204 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k
8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.283595 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.287885 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.287939 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.287955 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.287972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.287985 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:06Z","lastTransitionTime":"2026-01-06T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.302401 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.323974 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.338569 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.354548 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 
14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.370877 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.388482 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.390692 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.390729 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.390741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.390760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.390771 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:06Z","lastTransitionTime":"2026-01-06T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.405277 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.418633 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.493905 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.493956 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.493970 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.493992 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.494006 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:06Z","lastTransitionTime":"2026-01-06T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.596469 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.596515 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.596524 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.596539 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.596550 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:06Z","lastTransitionTime":"2026-01-06T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.699301 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.699703 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.699714 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.699734 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.699747 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:06Z","lastTransitionTime":"2026-01-06T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.803261 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.803543 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.803564 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.803591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.803610 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:06Z","lastTransitionTime":"2026-01-06T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.907655 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.907704 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.907713 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.907726 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.907736 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:06Z","lastTransitionTime":"2026-01-06T14:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.948613 4869 generic.go:334] "Generic (PLEG): container finished" podID="cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74" containerID="8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2" exitCode=0 Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.948744 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" event={"ID":"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74","Type":"ContainerDied","Data":"8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2"} Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.960904 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerStarted","Data":"34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3"} Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.965762 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 
14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:06 crc kubenswrapper[4869]: I0106 14:00:06.986860 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:06Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.010733 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.010786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.010797 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.010816 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.010656 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-releas
e\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:07Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.010831 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:07Z","lastTransitionTime":"2026-01-06T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.016121 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.017036 4869 scope.go:117] "RemoveContainer" containerID="e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1" Jan 06 14:00:07 crc kubenswrapper[4869]: E0106 14:00:07.017297 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.026143 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:07Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.038354 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:07Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.051066 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:07Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.070292 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:07Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.084766 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:07Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.100626 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:07Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.114199 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:07Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.117525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.117566 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.117576 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.117596 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.117606 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:07Z","lastTransitionTime":"2026-01-06T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.133912 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:07Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.150600 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:07Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.219985 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.220080 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.220113 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.220174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.220196 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:07Z","lastTransitionTime":"2026-01-06T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.335575 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.335616 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.335624 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.335641 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.335652 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:07Z","lastTransitionTime":"2026-01-06T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.438806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.438858 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.438867 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.438882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.438894 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:07Z","lastTransitionTime":"2026-01-06T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.543390 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.543469 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.543486 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.543515 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.543537 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:07Z","lastTransitionTime":"2026-01-06T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.646267 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.646318 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.646329 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.646347 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.646377 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:07Z","lastTransitionTime":"2026-01-06T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.704886 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:07 crc kubenswrapper[4869]: E0106 14:00:07.705028 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.705437 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:07 crc kubenswrapper[4869]: E0106 14:00:07.705507 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.705586 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:07 crc kubenswrapper[4869]: E0106 14:00:07.705651 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.748543 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.748578 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.748587 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.748600 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.748609 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:07Z","lastTransitionTime":"2026-01-06T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.850851 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.850890 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.850899 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.850914 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.850924 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:07Z","lastTransitionTime":"2026-01-06T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.953009 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.953041 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.953051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.953064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.953073 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:07Z","lastTransitionTime":"2026-01-06T14:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.965714 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" event={"ID":"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74","Type":"ContainerStarted","Data":"644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb"} Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.983894 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:07Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:07 crc kubenswrapper[4869]: I0106 14:00:07.999074 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 
14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:07Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.014273 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.029162 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.040380 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.053077 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-l
ib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.055015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.055042 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.055051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.055066 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.055077 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:08Z","lastTransitionTime":"2026-01-06T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.065264 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.086079 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.100866 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webho
ok\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.114331 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.131165 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.145806 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.158131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.158188 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.158201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.158229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.158245 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:08Z","lastTransitionTime":"2026-01-06T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.261510 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.261555 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.261564 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.261596 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.261609 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:08Z","lastTransitionTime":"2026-01-06T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.294217 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-vjd79"] Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.294657 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-vjd79" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.297512 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.297554 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.297887 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.298061 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.309946 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.332031 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPa
th\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"
192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.346432 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.363513 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.364570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.364597 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.364611 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.364630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.364653 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:08Z","lastTransitionTime":"2026-01-06T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.378462 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.394274 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.406344 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.420365 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.422789 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be5e99e3-237b-417d-b5b1-95187549c6ca-host\") pod \"node-ca-vjd79\" (UID: \"be5e99e3-237b-417d-b5b1-95187549c6ca\") " pod="openshift-image-registry/node-ca-vjd79" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.422940 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/be5e99e3-237b-417d-b5b1-95187549c6ca-serviceca\") pod \"node-ca-vjd79\" (UID: \"be5e99e3-237b-417d-b5b1-95187549c6ca\") " pod="openshift-image-registry/node-ca-vjd79" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.423059 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdc4j\" (UniqueName: 
\"kubernetes.io/projected/be5e99e3-237b-417d-b5b1-95187549c6ca-kube-api-access-tdc4j\") pod \"node-ca-vjd79\" (UID: \"be5e99e3-237b-417d-b5b1-95187549c6ca\") " pod="openshift-image-registry/node-ca-vjd79" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.432869 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.448171 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.462296 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.470603 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.470734 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.470753 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.470772 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.470785 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:08Z","lastTransitionTime":"2026-01-06T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.482837 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.495888 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.524307 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be5e99e3-237b-417d-b5b1-95187549c6ca-host\") pod \"node-ca-vjd79\" (UID: \"be5e99e3-237b-417d-b5b1-95187549c6ca\") " pod="openshift-image-registry/node-ca-vjd79" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.524369 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/be5e99e3-237b-417d-b5b1-95187549c6ca-serviceca\") pod \"node-ca-vjd79\" (UID: 
\"be5e99e3-237b-417d-b5b1-95187549c6ca\") " pod="openshift-image-registry/node-ca-vjd79" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.524392 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdc4j\" (UniqueName: \"kubernetes.io/projected/be5e99e3-237b-417d-b5b1-95187549c6ca-kube-api-access-tdc4j\") pod \"node-ca-vjd79\" (UID: \"be5e99e3-237b-417d-b5b1-95187549c6ca\") " pod="openshift-image-registry/node-ca-vjd79" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.524824 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be5e99e3-237b-417d-b5b1-95187549c6ca-host\") pod \"node-ca-vjd79\" (UID: \"be5e99e3-237b-417d-b5b1-95187549c6ca\") " pod="openshift-image-registry/node-ca-vjd79" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.525962 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/be5e99e3-237b-417d-b5b1-95187549c6ca-serviceca\") pod \"node-ca-vjd79\" (UID: \"be5e99e3-237b-417d-b5b1-95187549c6ca\") " pod="openshift-image-registry/node-ca-vjd79" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.547970 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdc4j\" (UniqueName: \"kubernetes.io/projected/be5e99e3-237b-417d-b5b1-95187549c6ca-kube-api-access-tdc4j\") pod \"node-ca-vjd79\" (UID: \"be5e99e3-237b-417d-b5b1-95187549c6ca\") " pod="openshift-image-registry/node-ca-vjd79" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.574426 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.574882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.574950 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.575050 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.575113 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:08Z","lastTransitionTime":"2026-01-06T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.607312 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-vjd79" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.678871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.678924 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.678937 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.678955 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.678967 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:08Z","lastTransitionTime":"2026-01-06T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.687228 4869 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.783771 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.783813 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.783825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.783843 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.783856 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:08Z","lastTransitionTime":"2026-01-06T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.887078 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.887127 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.887138 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.887157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.887168 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:08Z","lastTransitionTime":"2026-01-06T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.971597 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vjd79" event={"ID":"be5e99e3-237b-417d-b5b1-95187549c6ca","Type":"ContainerStarted","Data":"9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3"} Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.971666 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vjd79" event={"ID":"be5e99e3-237b-417d-b5b1-95187549c6ca","Type":"ContainerStarted","Data":"79a48885ba0d56ae9954d69da644878a359b65cd2e1e3d5ddb6991542a16c328"} Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.976943 4869 generic.go:334] "Generic (PLEG): container finished" podID="cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74" containerID="644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb" exitCode=0 Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.977030 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" event={"ID":"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74","Type":"ContainerDied","Data":"644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb"} Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.983429 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerStarted","Data":"29dd366bf82599fe9433146c4881f57556d852314c89ea1747ea34dd97491226"} Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.983781 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.990353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.990398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.990471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.990489 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.990505 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:08Z","lastTransitionTime":"2026-01-06T14:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:08 crc kubenswrapper[4869]: I0106 14:00:08.993400 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:08Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.013106 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 
14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.024439 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.029770 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.041821 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.056348 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.068909 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.081588 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-
dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.091976 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.092913 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.092951 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.094974 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.094998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.095015 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:09Z","lastTransitionTime":"2026-01-06T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.097610 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.098447 4869 scope.go:117] "RemoveContainer" containerID="e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1" Jan 06 14:00:09 crc kubenswrapper[4869]: E0106 14:00:09.098640 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.111576 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z 
is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.124843 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.141592 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.155306 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.170549 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.187917 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 
14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.199273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.199318 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.199330 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.199347 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.199359 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:09Z","lastTransitionTime":"2026-01-06T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.203867 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.221314 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with 
incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.235676 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.248008 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.258745 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.271133 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.289751 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29dd366bf82599fe9433146c4881f57556d852314c89ea1747ea34dd97491226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.308785 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.311575 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.311653 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.311703 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.311734 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.311755 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:09Z","lastTransitionTime":"2026-01-06T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.325230 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.330897 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.330947 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.330977 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.331003 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:09 crc kubenswrapper[4869]: E0106 14:00:09.331085 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 06 14:00:09 crc kubenswrapper[4869]: E0106 14:00:09.331118 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 06 14:00:09 crc kubenswrapper[4869]: E0106 14:00:09.331135 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:09 crc kubenswrapper[4869]: E0106 14:00:09.331127 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 06 14:00:09 crc kubenswrapper[4869]: E0106 14:00:09.331187 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 06 14:00:09 crc kubenswrapper[4869]: E0106 14:00:09.331137 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 06 14:00:09 crc 
kubenswrapper[4869]: E0106 14:00:09.331216 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 06 14:00:09 crc kubenswrapper[4869]: E0106 14:00:09.331190 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:17.33117415 +0000 UTC m=+35.870861814 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:09 crc kubenswrapper[4869]: E0106 14:00:09.331248 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:17.331232552 +0000 UTC m=+35.870920216 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 06 14:00:09 crc kubenswrapper[4869]: E0106 14:00:09.331224 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:09 crc kubenswrapper[4869]: E0106 14:00:09.331262 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:17.331255612 +0000 UTC m=+35.870943266 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 06 14:00:09 crc kubenswrapper[4869]: E0106 14:00:09.331315 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:17.331288543 +0000 UTC m=+35.870976207 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.340085 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.353724 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.367873 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:09Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.415485 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.415546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.415560 4869 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.415581 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.415595 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:09Z","lastTransitionTime":"2026-01-06T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.431403 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:00:09 crc kubenswrapper[4869]: E0106 14:00:09.431679 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:00:17.431657237 +0000 UTC m=+35.971344901 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.518863 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.519594 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.519679 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.519779 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.519838 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:09Z","lastTransitionTime":"2026-01-06T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.622372 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.622602 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.622730 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.622801 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.622879 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:09Z","lastTransitionTime":"2026-01-06T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.703739 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:09 crc kubenswrapper[4869]: E0106 14:00:09.704143 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.703874 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:09 crc kubenswrapper[4869]: E0106 14:00:09.704426 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.703790 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:09 crc kubenswrapper[4869]: E0106 14:00:09.704650 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.725539 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.725572 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.725614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.725634 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.725644 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:09Z","lastTransitionTime":"2026-01-06T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.828388 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.828428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.828439 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.828457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.828470 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:09Z","lastTransitionTime":"2026-01-06T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.931134 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.931187 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.931195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.931208 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.931219 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:09Z","lastTransitionTime":"2026-01-06T14:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.989862 4869 generic.go:334] "Generic (PLEG): container finished" podID="cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74" containerID="5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73" exitCode=0 Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.989940 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" event={"ID":"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74","Type":"ContainerDied","Data":"5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73"} Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.989984 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 06 14:00:09 crc kubenswrapper[4869]: I0106 14:00:09.990408 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.005793 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.022638 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.025133 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.034413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.034452 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.034464 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.034482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.034493 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:10Z","lastTransitionTime":"2026-01-06T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.040980 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.054939 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.067098 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.087173 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29dd366bf82599fe9433146c4881f57556d852314c89ea1747ea34dd97491226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.107191 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.119905 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.137152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.137185 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.137195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.137212 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.137223 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:10Z","lastTransitionTime":"2026-01-06T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.137826 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.152770 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.167330 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.183980 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 
14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.196839 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.217837 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29dd366bf82599fe9433146c4881f57556d85231
4c89ea1747ea34dd97491226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.232943 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.239347 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.239376 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.239387 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.239403 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.239413 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:10Z","lastTransitionTime":"2026-01-06T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.244649 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.257891 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.276961 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.290840 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.305803 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.320149 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.339705 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.341474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.341526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.341539 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.341557 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.341570 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:10Z","lastTransitionTime":"2026-01-06T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.359669 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.371834 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.382045 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.396098 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:10Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.443930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.443987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.443999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.444016 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.444029 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:10Z","lastTransitionTime":"2026-01-06T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.546842 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.546888 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.546902 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.546917 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.546928 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:10Z","lastTransitionTime":"2026-01-06T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.649510 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.649602 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.649628 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.649668 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.649733 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:10Z","lastTransitionTime":"2026-01-06T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.753119 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.753183 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.753194 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.753217 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.753228 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:10Z","lastTransitionTime":"2026-01-06T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.856965 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.857047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.857070 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.857102 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.857125 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:10Z","lastTransitionTime":"2026-01-06T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.960597 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.960658 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.960711 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.960739 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.960788 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:10Z","lastTransitionTime":"2026-01-06T14:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:10 crc kubenswrapper[4869]: I0106 14:00:10.999606 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" event={"ID":"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74","Type":"ContainerStarted","Data":"3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b"} Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:10.999725 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.016533 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.035093 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.056162 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.064148 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.064218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.064235 4869 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.064257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.064273 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:11Z","lastTransitionTime":"2026-01-06T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.073132 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.096613 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.113221 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.136021 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.158467 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.167322 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.167538 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.167623 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.167710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.167784 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:11Z","lastTransitionTime":"2026-01-06T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.173612 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.186873 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.200075 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.269803 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.270116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.270213 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.270301 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.270374 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:11Z","lastTransitionTime":"2026-01-06T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.271487 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29dd366bf82599fe9433146c4881f57556d852314c89ea1747ea34dd97491226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/r
un/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.310054 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.379220 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.379269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.379281 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.379300 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.379314 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:11Z","lastTransitionTime":"2026-01-06T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.482841 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.482898 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.482912 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.482943 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.482961 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:11Z","lastTransitionTime":"2026-01-06T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.585119 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.585412 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.585591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.585746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.585926 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:11Z","lastTransitionTime":"2026-01-06T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.688477 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.688521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.688535 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.688552 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.688565 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:11Z","lastTransitionTime":"2026-01-06T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.703788 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.703884 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:11 crc kubenswrapper[4869]: E0106 14:00:11.703929 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.703942 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:11 crc kubenswrapper[4869]: E0106 14:00:11.704028 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:11 crc kubenswrapper[4869]: E0106 14:00:11.704134 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.719768 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 
14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.733586 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.750505 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.770616 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.785242 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.792321 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.792353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.792379 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.792395 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.792405 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:11Z","lastTransitionTime":"2026-01-06T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.800152 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.810878 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.831237 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29dd366bf82599fe9433146c4881f57556d852314c89ea1747ea34dd97491226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.846068 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.859971 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.876584 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.890099 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.894306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.894478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.894567 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.894629 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.894707 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:11Z","lastTransitionTime":"2026-01-06T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.904379 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:11Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.996921 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.996958 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.996967 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.996983 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:11 crc kubenswrapper[4869]: I0106 14:00:11.997008 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:11Z","lastTransitionTime":"2026-01-06T14:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.002355 4869 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.100011 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.100073 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.100086 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.100106 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.100120 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:12Z","lastTransitionTime":"2026-01-06T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.203079 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.203136 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.203150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.203169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.203182 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:12Z","lastTransitionTime":"2026-01-06T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.306385 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.306436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.306447 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.306463 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.306474 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:12Z","lastTransitionTime":"2026-01-06T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.409074 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.409114 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.409123 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.409140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.409152 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:12Z","lastTransitionTime":"2026-01-06T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.512109 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.512149 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.512157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.512172 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.512181 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:12Z","lastTransitionTime":"2026-01-06T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.614545 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.614587 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.614601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.614619 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.614631 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:12Z","lastTransitionTime":"2026-01-06T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.717952 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.718011 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.718021 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.718037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.718048 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:12Z","lastTransitionTime":"2026-01-06T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.800665 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs"] Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.801268 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.804039 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.804199 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.820351 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.820612 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.820703 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.820798 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.820871 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:12Z","lastTransitionTime":"2026-01-06T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.823596 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:12Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.838836 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:12Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.853538 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:12Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.866232 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2940a7ac-7d7a-4b21-805d-a6d2afa4a3af-env-overrides\") pod \"ovnkube-control-plane-749d76644c-64qxs\" (UID: \"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.866290 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2940a7ac-7d7a-4b21-805d-a6d2afa4a3af-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-64qxs\" (UID: \"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.866313 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8wdn\" (UniqueName: \"kubernetes.io/projected/2940a7ac-7d7a-4b21-805d-a6d2afa4a3af-kube-api-access-l8wdn\") pod \"ovnkube-control-plane-749d76644c-64qxs\" (UID: \"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.866398 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2940a7ac-7d7a-4b21-805d-a6d2afa4a3af-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-64qxs\" (UID: \"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.868346 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:12Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.893148 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29dd366bf82599fe9433146c4881f57556d852314c89ea1747ea34dd97491226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:12Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.909318 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:12Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.923908 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.923949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.923957 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.923974 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.923987 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:12Z","lastTransitionTime":"2026-01-06T14:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.927147 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:12Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.936539 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.940701 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:12Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.954030 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:12Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.967656 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2940a7ac-7d7a-4b21-805d-a6d2afa4a3af-env-overrides\") pod \"ovnkube-control-plane-749d76644c-64qxs\" (UID: \"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.967969 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2940a7ac-7d7a-4b21-805d-a6d2afa4a3af-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-64qxs\" (UID: \"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.968077 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8wdn\" (UniqueName: \"kubernetes.io/projected/2940a7ac-7d7a-4b21-805d-a6d2afa4a3af-kube-api-access-l8wdn\") pod \"ovnkube-control-plane-749d76644c-64qxs\" (UID: \"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.968512 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2940a7ac-7d7a-4b21-805d-a6d2afa4a3af-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-64qxs\" (UID: \"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.968558 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2940a7ac-7d7a-4b21-805d-a6d2afa4a3af-env-overrides\") pod \"ovnkube-control-plane-749d76644c-64qxs\" (UID: \"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.968731 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2940a7ac-7d7a-4b21-805d-a6d2afa4a3af-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-64qxs\" (UID: \"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.970041 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:12Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.975068 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2940a7ac-7d7a-4b21-805d-a6d2afa4a3af-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-64qxs\" (UID: \"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.987223 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8wdn\" (UniqueName: \"kubernetes.io/projected/2940a7ac-7d7a-4b21-805d-a6d2afa4a3af-kube-api-access-l8wdn\") pod \"ovnkube-control-plane-749d76644c-64qxs\" (UID: \"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" Jan 06 14:00:12 crc kubenswrapper[4869]: I0106 14:00:12.989543 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:12Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.005195 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:13Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.005716 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2f9tq_487c527a-7d89-4175-8827-c8cdd6e0211f/ovnkube-controller/0.log" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.008563 4869 generic.go:334] "Generic (PLEG): container finished" podID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerID="29dd366bf82599fe9433146c4881f57556d852314c89ea1747ea34dd97491226" exitCode=1 Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.008652 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerDied","Data":"29dd366bf82599fe9433146c4881f57556d852314c89ea1747ea34dd97491226"} Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.009321 4869 scope.go:117] "RemoveContainer" containerID="29dd366bf82599fe9433146c4881f57556d852314c89ea1747ea34dd97491226" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.023837 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:13Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.027058 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.027115 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:13 crc 
kubenswrapper[4869]: I0106 14:00:13.027131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.027154 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.027167 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:13Z","lastTransitionTime":"2026-01-06T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.042576 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-64qxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:13Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.054769 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:13Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.070936 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.070986 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.070999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.071017 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.071032 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:13Z","lastTransitionTime":"2026-01-06T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.071000 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:13Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.084393 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:13Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:13 crc kubenswrapper[4869]: E0106 14:00:13.085232 4869 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256
:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"si
zeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":46317936
5},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:13Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.093424 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.093481 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.093494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.093514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.093527 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:13Z","lastTransitionTime":"2026-01-06T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.103400 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:13Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:13 crc kubenswrapper[4869]: E0106 14:00:13.106335 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:13Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.111116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.111158 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.111171 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.111193 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.111207 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:13Z","lastTransitionTime":"2026-01-06T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.116192 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:13Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:13 crc 
kubenswrapper[4869]: I0106 14:00:13.117343 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.129610 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.129697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.129717 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.129746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.129768 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:13Z","lastTransitionTime":"2026-01-06T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.137381 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29dd366bf82599fe9433146c4881f57556d85231
4c89ea1747ea34dd97491226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29dd366bf82599fe9433146c4881f57556d852314c89ea1747ea34dd97491226\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"message\\\":\\\"or removal\\\\nI0106 14:00:12.147218 6107 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0106 14:00:12.147227 6107 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0106 14:00:12.147259 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:12.148448 6107 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:12.148469 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:12.148506 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0106 14:00:12.148507 6107 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0106 14:00:12.148565 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0106 14:00:12.148573 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0106 14:00:12.148556 6107 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0106 14:00:12.148634 6107 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0106 14:00:12.148671 6107 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0106 14:00:12.148691 6107 handler.go:208] Removed *v1.Node event handler 2\\\\nI0106 14:00:12.148767 6107 handler.go:208] Removed *v1.Node event handler 7\\\\nI0106 14:00:12.148798 6107 factory.go:656] Stopping watch factory\\\\nI0106 14:00:12.148832 6107 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:13Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.149183 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.149232 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.149243 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.149263 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.149274 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:13Z","lastTransitionTime":"2026-01-06T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.154637 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:13Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:13 crc kubenswrapper[4869]: E0106 14:00:13.167287 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:13Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:13 crc kubenswrapper[4869]: E0106 14:00:13.167581 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.167469 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:13Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.170525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.170563 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.170574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.170593 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.170605 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:13Z","lastTransitionTime":"2026-01-06T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.182164 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:13Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.199842 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:13Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.215330 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"container
ID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:13Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.228492 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-64qxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:13Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.242182 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 
14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:13Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.255828 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:13Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.273072 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.273135 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.273150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.273174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.273190 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:13Z","lastTransitionTime":"2026-01-06T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.376033 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.376078 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.376087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.376107 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.376117 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:13Z","lastTransitionTime":"2026-01-06T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.480190 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.480245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.480258 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.480276 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.480289 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:13Z","lastTransitionTime":"2026-01-06T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.583277 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.583311 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.583320 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.583336 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.583348 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:13Z","lastTransitionTime":"2026-01-06T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.685806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.685857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.685866 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.685886 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.685897 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:13Z","lastTransitionTime":"2026-01-06T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.703634 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:13 crc kubenswrapper[4869]: E0106 14:00:13.703806 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.703901 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:13 crc kubenswrapper[4869]: E0106 14:00:13.704029 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.704139 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:13 crc kubenswrapper[4869]: E0106 14:00:13.704346 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.788740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.788786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.788796 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.788817 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.788827 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:13Z","lastTransitionTime":"2026-01-06T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.892813 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.892850 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.892859 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.892875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.892885 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:13Z","lastTransitionTime":"2026-01-06T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.994897 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.994945 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.994954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.994969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:13 crc kubenswrapper[4869]: I0106 14:00:13.994979 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:13Z","lastTransitionTime":"2026-01-06T14:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.013159 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2f9tq_487c527a-7d89-4175-8827-c8cdd6e0211f/ovnkube-controller/0.log" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.015491 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerStarted","Data":"5b810666160b15b302045047eba5951adf2abd173a82fe51f769af08ecfafbce"} Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.016842 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" event={"ID":"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af","Type":"ContainerStarted","Data":"135cdf06b4dab396dd133be2b922d563745a0bfd2fc9dce55e2cdbb2a3447ecc"} Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.016891 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" event={"ID":"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af","Type":"ContainerStarted","Data":"572e161cd93808d72bb4756c10bd17313f5b0ba93e89bc1aebb5b3c65d5274f6"} Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.097868 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.097907 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.097915 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.097931 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.097943 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:14Z","lastTransitionTime":"2026-01-06T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.200869 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.200906 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.200914 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.200927 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.200936 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:14Z","lastTransitionTime":"2026-01-06T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.303594 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.303630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.303643 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.303658 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.303675 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:14Z","lastTransitionTime":"2026-01-06T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.406793 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.406835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.406846 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.406861 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.406876 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:14Z","lastTransitionTime":"2026-01-06T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.509545 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.509581 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.509591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.509607 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.509619 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:14Z","lastTransitionTime":"2026-01-06T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.613070 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.613101 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.613110 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.613126 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.613135 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:14Z","lastTransitionTime":"2026-01-06T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.686733 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-mmdq4"] Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.687202 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:14 crc kubenswrapper[4869]: E0106 14:00:14.687269 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.709120 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://29dd366bf82599fe9433146c4881f57556d85231
4c89ea1747ea34dd97491226\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29dd366bf82599fe9433146c4881f57556d852314c89ea1747ea34dd97491226\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"message\\\":\\\"or removal\\\\nI0106 14:00:12.147218 6107 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0106 14:00:12.147227 6107 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0106 14:00:12.147259 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:12.148448 6107 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:12.148469 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:12.148506 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0106 14:00:12.148507 6107 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0106 14:00:12.148565 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0106 14:00:12.148573 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0106 14:00:12.148556 6107 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0106 14:00:12.148634 6107 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0106 14:00:12.148671 6107 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0106 14:00:12.148691 6107 handler.go:208] Removed *v1.Node event handler 2\\\\nI0106 14:00:12.148767 6107 handler.go:208] Removed *v1.Node event handler 7\\\\nI0106 14:00:12.148798 6107 factory.go:656] Stopping watch factory\\\\nI0106 14:00:12.148832 6107 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:14Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.715237 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.715560 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.715709 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.715835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.715974 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:14Z","lastTransitionTime":"2026-01-06T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.726547 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:14Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.738450 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:14Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.751807 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:14Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.765030 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:14Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.777317 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:14Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.788457 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cndw2\" (UniqueName: \"kubernetes.io/projected/b86d961d-74c0-40cb-912d-ae0db79d97f2-kube-api-access-cndw2\") pod \"network-metrics-daemon-mmdq4\" (UID: \"b86d961d-74c0-40cb-912d-ae0db79d97f2\") " pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.788527 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs\") pod \"network-metrics-daemon-mmdq4\" (UID: \"b86d961d-74c0-40cb-912d-ae0db79d97f2\") " pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.790687 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:14Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.803394 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:14Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.818788 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.819093 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.819222 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.819332 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.819429 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:14Z","lastTransitionTime":"2026-01-06T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.820758 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:14Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.836177 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-64qxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:14Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.849738 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:14Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.860516 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status 
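
Every one of these status-patch failures shares a single root cause: the pod.network-node-identity.openshift.io webhook on 127.0.0.1:9743 presents a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-06T14:00:14Z. A minimal sketch, using only the Python standard library, that pulls both timestamps out of such an error string and reports how stale the certificate is:

    import re
    from datetime import datetime

    ERR = ("tls: failed to verify certificate: x509: certificate has expired or is "
           "not yet valid: current time 2026-01-06T14:00:14Z is after 2025-08-24T17:21:41Z")

    m = re.search(r"current time (\S+) is after (\S+)", ERR)
    now, not_after = (datetime.strptime(t, "%Y-%m-%dT%H:%M:%SZ") for t in m.groups())
    print(f"certificate expired {now - not_after} before the request")  # 134 days, 20:38:33

Until that certificate is rotated (or the node clock corrected), no patch the kubelet sends can succeed, which is consistent with the identical error repeating below for every pod on the node.
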
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:14Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.872534 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:14Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.885243 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b86d961d-74c0-40cb-912d-ae0db79d97f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mmdq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:14Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.889956 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cndw2\" (UniqueName: \"kubernetes.io/projected/b86d961d-74c0-40cb-912d-ae0db79d97f2-kube-api-access-cndw2\") pod \"network-metrics-daemon-mmdq4\" (UID: \"b86d961d-74c0-40cb-912d-ae0db79d97f2\") " pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.890043 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs\") pod \"network-metrics-daemon-mmdq4\" (UID: \"b86d961d-74c0-40cb-912d-ae0db79d97f2\") " pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:14 crc kubenswrapper[4869]: E0106 14:00:14.890231 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 06 14:00:14 crc kubenswrapper[4869]: E0106 14:00:14.890334 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs podName:b86d961d-74c0-40cb-912d-ae0db79d97f2 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:15.390306323 +0000 UTC m=+33.929994027 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs") pod "network-metrics-daemon-mmdq4" (UID: "b86d961d-74c0-40cb-912d-ae0db79d97f2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.898293 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:14Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.910165 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cndw2\" (UniqueName: \"kubernetes.io/projected/b86d961d-74c0-40cb-912d-ae0db79d97f2-kube-api-access-cndw2\") pod \"network-metrics-daemon-mmdq4\" (UID: \"b86d961d-74c0-40cb-912d-ae0db79d97f2\") " pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.921777 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.921816 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.921827 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.921846 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:14 crc kubenswrapper[4869]: I0106 14:00:14.921856 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:14Z","lastTransitionTime":"2026-01-06T14:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.022161 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2f9tq_487c527a-7d89-4175-8827-c8cdd6e0211f/ovnkube-controller/1.log" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.023065 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2f9tq_487c527a-7d89-4175-8827-c8cdd6e0211f/ovnkube-controller/0.log" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.024789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.024834 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.024848 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.024873 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.024884 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:15Z","lastTransitionTime":"2026-01-06T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.027251 4869 generic.go:334] "Generic (PLEG): container finished" podID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerID="5b810666160b15b302045047eba5951adf2abd173a82fe51f769af08ecfafbce" exitCode=1 Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.027327 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerDied","Data":"5b810666160b15b302045047eba5951adf2abd173a82fe51f769af08ecfafbce"} Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.027370 4869 scope.go:117] "RemoveContainer" containerID="29dd366bf82599fe9433146c4881f57556d852314c89ea1747ea34dd97491226" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.028626 4869 scope.go:117] "RemoveContainer" containerID="5b810666160b15b302045047eba5951adf2abd173a82fe51f769af08ecfafbce" Jan 06 14:00:15 crc kubenswrapper[4869]: E0106 14:00:15.028998 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2f9tq_openshift-ovn-kubernetes(487c527a-7d89-4175-8827-c8cdd6e0211f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.031411 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" event={"ID":"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af","Type":"ContainerStarted","Data":"a0b3d2c1a91a8a2b3549c9a11e1424037b15b51e7701062eb7e95dff4dfb5cfe"} Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.044646 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.058351 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.069099 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.084414 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b86d961d-74c0-40cb-912d-ae0db79d97f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mmdq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.094073 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.112742 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b810666160b15b302045047eba5951adf2abd173a82fe51f769af08ecfafbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29dd366bf82599fe9433146c4881f57556d852314c89ea1747ea34dd97491226\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"message\\\":\\\"or removal\\\\nI0106 14:00:12.147218 6107 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0106 14:00:12.147227 6107 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0106 14:00:12.147259 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:12.148448 6107 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:12.148469 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:12.148506 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0106 14:00:12.148507 6107 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0106 14:00:12.148565 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0106 14:00:12.148573 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0106 14:00:12.148556 6107 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0106 14:00:12.148634 6107 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0106 14:00:12.148671 6107 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0106 14:00:12.148691 6107 handler.go:208] Removed *v1.Node event handler 2\\\\nI0106 14:00:12.148767 6107 handler.go:208] Removed *v1.Node event handler 7\\\\nI0106 14:00:12.148798 6107 factory.go:656] Stopping watch factory\\\\nI0106 14:00:12.148832 6107 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b810666160b15b302045047eba5951adf2abd173a82fe51f769af08ecfafbce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"eflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.675955 6306 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.676162 6306 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0106 14:00:14.676256 6306 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.675654 6306 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0106 14:00:14.676745 6306 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0106 14:00:14.676792 6306 handler.go:190] Sending *v1.Pod event 
handler 6 for removal\\\\nI0106 14:00:14.676804 6306 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0106 14:00:14.676841 6306 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:14.676853 6306 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:14.676861 6306 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:14.677837 6306 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.126669 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.128090 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.128118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.128128 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.128144 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.128154 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:15Z","lastTransitionTime":"2026-01-06T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.140075 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.153944 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.168030 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.179414 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.198858 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 
14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.215698 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.230147 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.230181 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.230192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.230209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.230222 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:15Z","lastTransitionTime":"2026-01-06T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.235870 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.247898 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-64qxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.261631 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.273691 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.286869 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.301181 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.315618 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name
\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.326750 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135cdf06b4dab396dd133be2b922d563745a0bfd2fc9dce55e2cdbb2a3447ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3d2c1a91a8a2b3549c9a11e1424037b15b51e7701062eb7e95dff4dfb5cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-64qxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.332944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.332982 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.332992 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.333009 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.333021 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:15Z","lastTransitionTime":"2026-01-06T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.340574 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.352723 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.362418 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.372237 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b86d961d-74c0-40cb-912d-ae0db79d97f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mmdq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.385352 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.395376 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs\") pod \"network-metrics-daemon-mmdq4\" (UID: \"b86d961d-74c0-40cb-912d-ae0db79d97f2\") " pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:15 crc kubenswrapper[4869]: E0106 14:00:15.395550 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 06 14:00:15 crc kubenswrapper[4869]: E0106 14:00:15.395625 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs podName:b86d961d-74c0-40cb-912d-ae0db79d97f2 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:16.395606141 +0000 UTC m=+34.935293815 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs") pod "network-metrics-daemon-mmdq4" (UID: "b86d961d-74c0-40cb-912d-ae0db79d97f2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.397739 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14
:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.415320 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\"
,\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.425581 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 
14:00:15.435876 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.435938 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.435952 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.435971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.435984 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:15Z","lastTransitionTime":"2026-01-06T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.448756 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b810666160b15b302045047eba5951adf2abd17
3a82fe51f769af08ecfafbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29dd366bf82599fe9433146c4881f57556d852314c89ea1747ea34dd97491226\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"message\\\":\\\"or removal\\\\nI0106 14:00:12.147218 6107 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0106 14:00:12.147227 6107 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0106 14:00:12.147259 6107 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:12.148448 6107 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:12.148469 6107 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:12.148506 6107 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0106 14:00:12.148507 6107 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0106 14:00:12.148565 6107 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0106 14:00:12.148573 6107 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0106 14:00:12.148556 6107 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0106 14:00:12.148634 6107 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0106 14:00:12.148671 6107 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0106 14:00:12.148691 6107 handler.go:208] Removed *v1.Node event handler 2\\\\nI0106 14:00:12.148767 6107 handler.go:208] Removed *v1.Node event handler 7\\\\nI0106 14:00:12.148798 6107 factory.go:656] Stopping watch factory\\\\nI0106 14:00:12.148832 6107 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b810666160b15b302045047eba5951adf2abd173a82fe51f769af08ecfafbce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"eflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.675955 6306 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.676162 6306 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0106 14:00:14.676256 6306 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.675654 6306 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0106 14:00:14.676745 6306 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0106 14:00:14.676792 6306 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0106 14:00:14.676804 6306 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0106 14:00:14.676841 6306 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:14.676853 6306 handler.go:208] Removed 
*v1.EgressFirewall event handler 9\\\\nI0106 14:00:14.676861 6306 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:14.677837 6306 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"
cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:15Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.539005 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.539053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.539062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.539077 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.539092 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:15Z","lastTransitionTime":"2026-01-06T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.641527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.641577 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.641587 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.641603 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.641615 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:15Z","lastTransitionTime":"2026-01-06T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.704351 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.704471 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:15 crc kubenswrapper[4869]: E0106 14:00:15.704505 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:15 crc kubenswrapper[4869]: E0106 14:00:15.704791 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.704939 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:15 crc kubenswrapper[4869]: E0106 14:00:15.705093 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.745060 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.745099 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.745107 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.745124 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.745135 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:15Z","lastTransitionTime":"2026-01-06T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.847600 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.847644 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.847655 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.847697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.847710 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:15Z","lastTransitionTime":"2026-01-06T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.950789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.950843 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.950855 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.950874 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:15 crc kubenswrapper[4869]: I0106 14:00:15.950887 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:15Z","lastTransitionTime":"2026-01-06T14:00:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.037922 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2f9tq_487c527a-7d89-4175-8827-c8cdd6e0211f/ovnkube-controller/1.log" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.042135 4869 scope.go:117] "RemoveContainer" containerID="5b810666160b15b302045047eba5951adf2abd173a82fe51f769af08ecfafbce" Jan 06 14:00:16 crc kubenswrapper[4869]: E0106 14:00:16.042391 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2f9tq_openshift-ovn-kubernetes(487c527a-7d89-4175-8827-c8cdd6e0211f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.053614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.053666 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.053700 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.053720 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.053731 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:16Z","lastTransitionTime":"2026-01-06T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.055719 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:16Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.071035 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:16Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.084337 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:16Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.096971 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b86d961d-74c0-40cb-912d-ae0db79d97f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mmdq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:16Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.109167 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:16Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.132430 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b810666160b15b302045047eba5951adf2abd173a82fe51f769af08ecfafbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b810666160b15b302045047eba5951adf2abd173a82fe51f769af08ecfafbce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"eflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.675955 6306 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.676162 6306 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0106 14:00:14.676256 6306 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.675654 6306 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0106 14:00:14.676745 6306 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0106 14:00:14.676792 6306 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0106 14:00:14.676804 6306 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0106 14:00:14.676841 6306 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:14.676853 6306 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:14.676861 6306 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:14.677837 6306 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2f9tq_openshift-ovn-kubernetes(487c527a-7d89-4175-8827-c8cdd6e0211f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:16Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.147314 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:16Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.155860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.156059 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.156161 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.156248 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.156323 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:16Z","lastTransitionTime":"2026-01-06T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.162771 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:16Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.175474 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:16Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.187623 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:16Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.198627 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:16Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.213485 4869 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:16Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.225587 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:16Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.239826 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:16Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.251182 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135cdf06b4dab396dd133be2b922d563745a0bfd2fc9dce55e2cdbb2a3447ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3d2c1a91a8a2b3549c9a11e1424037b15b51e7701062eb7e95dff4dfb5cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-64qxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:16Z is after 2025-08-24T17:21:41Z" Jan 06 
14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.258954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.258991 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.259003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.259022 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.259034 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:16Z","lastTransitionTime":"2026-01-06T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.362165 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.362202 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.362210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.362226 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.362235 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:16Z","lastTransitionTime":"2026-01-06T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.402973 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs\") pod \"network-metrics-daemon-mmdq4\" (UID: \"b86d961d-74c0-40cb-912d-ae0db79d97f2\") " pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:16 crc kubenswrapper[4869]: E0106 14:00:16.403115 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 06 14:00:16 crc kubenswrapper[4869]: E0106 14:00:16.403354 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs podName:b86d961d-74c0-40cb-912d-ae0db79d97f2 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:18.403334271 +0000 UTC m=+36.943021935 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs") pod "network-metrics-daemon-mmdq4" (UID: "b86d961d-74c0-40cb-912d-ae0db79d97f2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.465028 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.465076 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.465092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.465115 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.465132 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:16Z","lastTransitionTime":"2026-01-06T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.567814 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.567846 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.567855 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.567868 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.567877 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:16Z","lastTransitionTime":"2026-01-06T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.670486 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.670534 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.670544 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.670560 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.670570 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:16Z","lastTransitionTime":"2026-01-06T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.703294 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:16 crc kubenswrapper[4869]: E0106 14:00:16.703449 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.773605 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.773649 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.773663 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.773697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.773708 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:16Z","lastTransitionTime":"2026-01-06T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.876238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.876308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.876328 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.876359 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.876385 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:16Z","lastTransitionTime":"2026-01-06T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.979675 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.979714 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.979722 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.979759 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:16 crc kubenswrapper[4869]: I0106 14:00:16.979770 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:16Z","lastTransitionTime":"2026-01-06T14:00:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.083229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.084078 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.084130 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.084155 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.084170 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:17Z","lastTransitionTime":"2026-01-06T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.187937 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.187981 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.187990 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.188005 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.188017 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:17Z","lastTransitionTime":"2026-01-06T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.291124 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.291160 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.291168 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.291183 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.291195 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:17Z","lastTransitionTime":"2026-01-06T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.393936 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.393979 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.393987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.394003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.394016 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:17Z","lastTransitionTime":"2026-01-06T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.414416 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.414485 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.414517 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.414550 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:17 crc kubenswrapper[4869]: E0106 14:00:17.414584 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 06 14:00:17 crc kubenswrapper[4869]: E0106 14:00:17.414684 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:33.414640582 +0000 UTC m=+51.954328246 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 06 14:00:17 crc kubenswrapper[4869]: E0106 14:00:17.414716 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 06 14:00:17 crc kubenswrapper[4869]: E0106 14:00:17.414734 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 06 14:00:17 crc kubenswrapper[4869]: E0106 14:00:17.414729 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 06 14:00:17 crc kubenswrapper[4869]: E0106 14:00:17.414824 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:33.414796126 +0000 UTC m=+51.954483810 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 06 14:00:17 crc kubenswrapper[4869]: E0106 14:00:17.414749 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:17 crc kubenswrapper[4869]: E0106 14:00:17.414859 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 06 14:00:17 crc kubenswrapper[4869]: E0106 14:00:17.414903 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 06 14:00:17 crc kubenswrapper[4869]: E0106 14:00:17.414917 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:17 crc kubenswrapper[4869]: E0106 14:00:17.414975 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:33.414947609 +0000 UTC m=+51.954635273 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:17 crc kubenswrapper[4869]: E0106 14:00:17.414994 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:33.41498677 +0000 UTC m=+51.954674434 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.496820 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.496863 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.496872 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.496887 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.496898 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:17Z","lastTransitionTime":"2026-01-06T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.515386 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:00:17 crc kubenswrapper[4869]: E0106 14:00:17.515530 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:00:33.515509277 +0000 UTC m=+52.055196941 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.600172 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.600209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.600218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.600233 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.600245 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:17Z","lastTransitionTime":"2026-01-06T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.703186 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.703254 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.703275 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.703304 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.703326 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:17Z","lastTransitionTime":"2026-01-06T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.703421 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.703462 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:17 crc kubenswrapper[4869]: E0106 14:00:17.703524 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:17 crc kubenswrapper[4869]: E0106 14:00:17.703778 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.703894 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:17 crc kubenswrapper[4869]: E0106 14:00:17.704036 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.806304 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.806367 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.806384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.806411 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.806432 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:17Z","lastTransitionTime":"2026-01-06T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.910156 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.910222 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.910236 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.910258 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.910306 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:17Z","lastTransitionTime":"2026-01-06T14:00:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The preceding five-entry status cycle repeats, identical except for timestamps, at 14:00:18.013, 14:00:18.118, 14:00:18.221, and 14:00:18.325.]
Jan 06 14:00:18 crc kubenswrapper[4869]: I0106 14:00:18.427087 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs\") pod \"network-metrics-daemon-mmdq4\" (UID: \"b86d961d-74c0-40cb-912d-ae0db79d97f2\") " pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:00:18 crc kubenswrapper[4869]: E0106 14:00:18.427351 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 06 14:00:18 crc kubenswrapper[4869]: E0106 14:00:18.427460 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs podName:b86d961d-74c0-40cb-912d-ae0db79d97f2 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:22.427429878 +0000 UTC m=+40.967117572 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs") pod "network-metrics-daemon-mmdq4" (UID: "b86d961d-74c0-40cb-912d-ae0db79d97f2") : object "openshift-multus"/"metrics-daemon-secret" not registered
[Status cycle repeats at 14:00:18.429 and 14:00:18.533.]
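The 4s in "durationBeforeRetry 4s" above is the kubelet's per-volume exponential backoff, not a fixed interval. A minimal sketch, assuming the upstream kubelet defaults (500 ms initial delay, doubling per consecutive failure, capped at 2m2s, as in kubernetes' goroutinemap exponentialbackoff helper); under that assumption the logged 4 s delay corresponds to the fourth consecutive mount failure:

```python
from datetime import timedelta

# Sketch of the kubelet's per-volume retry backoff. The constants are the
# assumed upstream defaults (pkg/util/goroutinemap/exponentialbackoff):
# 500 ms initial delay, doubled on each consecutive failure, capped at 2m2s.
INITIAL = timedelta(milliseconds=500)
CAP = timedelta(minutes=2, seconds=2)

def duration_before_retry(consecutive_failures: int) -> timedelta:
    """Delay before the next retry after N consecutive failures."""
    return min(INITIAL * (2 ** (consecutive_failures - 1)), CAP)

if __name__ == "__main__":
    for n in range(1, 6):
        print(n, duration_before_retry(n))
    # Prints the 0.5s -> 1s -> 2s -> 4s progression: the "durationBeforeRetry 4s"
    # entry above implies this secret mount had already failed three times.
```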
[Status cycle repeats at 14:00:18.636.]
Jan 06 14:00:18 crc kubenswrapper[4869]: I0106 14:00:18.703778 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:00:18 crc kubenswrapper[4869]: E0106 14:00:18.704053 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
[Status cycle repeats at 14:00:18.741, 14:00:18.844, 14:00:18.947, and 14:00:19.050.]
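The pair of entries above (no sandbox, then "Error syncing pod, skipping") recurs for every pod blocked on the missing CNI configuration. To see at a glance which pods are affected and how often, a small filter can tally the skip errors per pod= field; a sketch assuming a journal dump such as this one is piped on stdin (the journalctl invocation is illustrative, not part of the log):

```python
import re
import sys
from collections import Counter

# Tally "Error syncing pod, skipping" entries per pod, e.g.:
#   journalctl -u kubelet --no-pager | python3 count_skips.py
PAT = re.compile(r'"Error syncing pod, skipping".*?pod="([^"]+)"')

# finditer per line copes with dumps where several journal entries
# share one physical line, as in this excerpt.
counts = Counter(m.group(1) for line in sys.stdin for m in PAT.finditer(line))
for pod, n in counts.most_common():
    print(f"{n:5d}  {pod}")
```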
[Status cycle repeats at 14:00:19.152, 14:00:19.255, 14:00:19.358, 14:00:19.461, 14:00:19.563, and 14:00:19.667.]
Jan 06 14:00:19 crc kubenswrapper[4869]: I0106 14:00:19.703654 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:00:19 crc kubenswrapper[4869]: E0106 14:00:19.703790 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:00:19 crc kubenswrapper[4869]: I0106 14:00:19.703858 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:00:19 crc kubenswrapper[4869]: I0106 14:00:19.703910 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:00:19 crc kubenswrapper[4869]: E0106 14:00:19.704042 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:00:19 crc kubenswrapper[4869]: E0106 14:00:19.704156 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[Status cycle repeats at 14:00:19.770.]
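Each "Node became not ready" entry embeds the node's Ready condition as inline JSON after condition=, so the reason and message can be recovered mechanically rather than by re-reading the repeated text. A standard-library sketch; the sample line is the 14:00:17.910306 entry above:

```python
import json

LINE = ('Jan 06 14:00:17 crc kubenswrapper[4869]: I0106 14:00:17.910306 4869 '
        'setters.go:603] "Node became not ready" node="crc" condition='
        '{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:17Z",'
        '"lastTransitionTime":"2026-01-06T14:00:17Z","reason":"KubeletNotReady",'
        '"message":"container runtime network not ready: NetworkReady=false '
        'reason:NetworkPluginNotReady message:Network plugin returns error: no CNI '
        'configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}')

def ready_condition(line: str) -> dict:
    """Parse the condition={...} JSON payload out of a 'Node became not ready' entry."""
    payload = line.split("condition=", 1)[1]
    # raw_decode reads one JSON object and ignores any trailing text,
    # so it also works on dumps with several entries per physical line.
    obj, _ = json.JSONDecoder().raw_decode(payload)
    return obj

cond = ready_condition(LINE)
print(cond["reason"])   # KubeletNotReady
print(cond["message"])  # container runtime network not ready: ...
```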
[Status cycle repeats at 14:00:19.873, 14:00:19.976, 14:00:20.079, 14:00:20.182, 14:00:20.285, 14:00:20.387, 14:00:20.490, 14:00:20.593, and 14:00:20.695.]
Jan 06 14:00:20 crc kubenswrapper[4869]: I0106 14:00:20.704102 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:00:20 crc kubenswrapper[4869]: E0106 14:00:20.704236 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
[Status cycle repeats at 14:00:20.797 and 14:00:20.900.]
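The status_manager entries just below (14:00:21) record pod status patches failing because the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a certificate that expired on 2025-08-24, long before the current clock of 2026-01-06. A sketch of how one might confirm such an expiry from the node, using the third-party cryptography package; the endpoint is taken from the log, and it is assumed that an unverified handshake succeeds so the expired certificate can still be fetched:

```python
import ssl
from datetime import datetime, timezone

from cryptography import x509  # third-party: pip install cryptography

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the entries below

# ssl.get_server_certificate does not validate the peer by default,
# so it can still retrieve a certificate that is already expired.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

# not_valid_after is a naive UTC datetime (newer cryptography releases
# also offer not_valid_after_utc).
not_after = cert.not_valid_after.replace(tzinfo=timezone.utc)
now = datetime.now(timezone.utc)
print(f"notAfter={not_after.isoformat()} expired={now > not_after}")
```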
[Status cycle repeats at 14:00:21.003, 14:00:21.105, 14:00:21.207, 14:00:21.311, 14:00:21.415, 14:00:21.518, and 14:00:21.622.]
Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.704138 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:00:21 crc kubenswrapper[4869]: E0106 14:00:21.704343 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.704405 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:00:21 crc kubenswrapper[4869]: E0106 14:00:21.704546 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.704620 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:00:21 crc kubenswrapper[4869]: E0106 14:00:21.704705 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.720894 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:21Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.725421 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.725486 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.725496 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.725515 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.725532 4869 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:21Z","lastTransitionTime":"2026-01-06T14:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.746131 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\
\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b810666160b15b302045047eba5951adf2abd173a82fe51f769af08ecfafbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b810666160b15b302045047eba5951adf2abd173a82fe51f769af08ecfafbce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"eflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.675955 6306 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.676162 6306 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0106 14:00:14.676256 6306 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.675654 6306 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0106 14:00:14.676745 6306 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0106 14:00:14.676792 6306 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0106 14:00:14.676804 6306 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0106 14:00:14.676841 6306 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:14.676853 6306 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:14.676861 6306 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:14.677837 6306 
factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2f9tq_openshift-ovn-kubernetes(487c527a-7d89-4175-8827-c8cdd6e0211f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:21Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.767162 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:21Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.791973 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:21Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.813925 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:21Z is after 2025-08-24T17:21:41Z"
Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.829243 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.829302 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.829320 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.829347 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.829366 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:21Z","lastTransitionTime":"2026-01-06T14:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.833842 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:21Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.850900 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:21Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.867876 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 
14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:21Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.887745 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:21Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.904149 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:21Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.917114 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135cdf06b4dab396dd133be2b922d563745a0bfd2fc9dce55e2cdbb2a3447ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3d2c1a91a8a2b3549c9a11e1424037b15b51e7701062eb7e95dff4dfb5cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-64qxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:21Z is after 2025-08-24T17:21:41Z" Jan 06 
14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.931775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.931817 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.931841 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.931858 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.931870 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:21Z","lastTransitionTime":"2026-01-06T14:00:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.933407 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:21Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.951813 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:21Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.965162 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:21Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:21 crc kubenswrapper[4869]: I0106 14:00:21.983040 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b86d961d-74c0-40cb-912d-ae0db79d97f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mmdq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:21Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:22 crc 
kubenswrapper[4869]: I0106 14:00:22.035003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.035097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.035115 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.035143 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.035162 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:22Z","lastTransitionTime":"2026-01-06T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.138069 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.138154 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.138173 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.138205 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.138226 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:22Z","lastTransitionTime":"2026-01-06T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.241044 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.241114 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.241132 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.241161 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.241180 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:22Z","lastTransitionTime":"2026-01-06T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
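Has your network provider started?"}

The NotReady condition being recorded over and over here has a single cause: the kubelet's runtime network check finds no CNI configuration in /etc/kubernetes/cni/net.d/, which stays empty until the network provider's pods come up and write a config file there. A rough illustration of that readiness test, loosely mirroring the CNI config-directory scan rather than the actual kubelet/libcni source:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// networkReady treats the runtime network as ready only once at least one
// CNI config file exists in the conf directory. Illustration only.
func networkReady(confDir string) error {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return nil // a network provider has written its config
		}
	}
	return fmt.Errorf("no CNI configuration file in %s. Has your network provider started?", confDir)
}

func main() {
	if err := networkReady("/etc/kubernetes/cni/net.d/"); err != nil {
		fmt.Println("NetworkReady=false:", err)
	}
}
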
Has your network provider started?"} Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.345259 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.345316 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.345327 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.345347 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.345360 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:22Z","lastTransitionTime":"2026-01-06T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.448064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.448130 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.448150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.448181 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.448202 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:22Z","lastTransitionTime":"2026-01-06T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.471835 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs\") pod \"network-metrics-daemon-mmdq4\" (UID: \"b86d961d-74c0-40cb-912d-ae0db79d97f2\") " pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:22 crc kubenswrapper[4869]: E0106 14:00:22.472075 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 06 14:00:22 crc kubenswrapper[4869]: E0106 14:00:22.472211 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs podName:b86d961d-74c0-40cb-912d-ae0db79d97f2 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:30.472173441 +0000 UTC m=+49.011861135 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs") pod "network-metrics-daemon-mmdq4" (UID: "b86d961d-74c0-40cb-912d-ae0db79d97f2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.550879 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.550935 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.550947 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.550965 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.550977 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:22Z","lastTransitionTime":"2026-01-06T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.653930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.653985 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.653997 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.654027 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.654045 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:22Z","lastTransitionTime":"2026-01-06T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.704137 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:22 crc kubenswrapper[4869]: E0106 14:00:22.704374 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.757387 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.757427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.757438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.757455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.757468 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:22Z","lastTransitionTime":"2026-01-06T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.860658 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.860746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.860762 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.860781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.860794 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:22Z","lastTransitionTime":"2026-01-06T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.963861 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.963899 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.963910 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.963926 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:22 crc kubenswrapper[4869]: I0106 14:00:22.963936 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:22Z","lastTransitionTime":"2026-01-06T14:00:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.067147 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.067517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.067645 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.067904 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.068035 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:23Z","lastTransitionTime":"2026-01-06T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.171891 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.171944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.171956 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.171977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.171989 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:23Z","lastTransitionTime":"2026-01-06T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.275211 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.275248 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.275259 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.275275 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.275286 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:23Z","lastTransitionTime":"2026-01-06T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.312063 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.312156 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.312186 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.312217 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.312241 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:23Z","lastTransitionTime":"2026-01-06T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:23 crc kubenswrapper[4869]: E0106 14:00:23.339470 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:23Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.346586 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.346710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.346731 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.346760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.346780 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:23Z","lastTransitionTime":"2026-01-06T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:23 crc kubenswrapper[4869]: E0106 14:00:23.370358 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:23Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.376277 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.376347 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.376368 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.376394 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.376412 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:23Z","lastTransitionTime":"2026-01-06T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:23 crc kubenswrapper[4869]: E0106 14:00:23.395885 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:23Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.402548 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.402607 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.402630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.402707 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.402752 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:23Z","lastTransitionTime":"2026-01-06T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:23 crc kubenswrapper[4869]: E0106 14:00:23.426890 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:23Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.432004 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.432048 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.432062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.432083 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.432103 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:23Z","lastTransitionTime":"2026-01-06T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:23 crc kubenswrapper[4869]: E0106 14:00:23.455945 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:23Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:23 crc kubenswrapper[4869]: E0106 14:00:23.456175 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.459298 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.459366 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.459386 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.459410 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.459428 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:23Z","lastTransitionTime":"2026-01-06T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.563823 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.563907 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.563932 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.563962 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.563984 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:23Z","lastTransitionTime":"2026-01-06T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.668942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.669007 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.669026 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.669045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.669059 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:23Z","lastTransitionTime":"2026-01-06T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.704359 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.704359 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.704652 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:23 crc kubenswrapper[4869]: E0106 14:00:23.704714 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:23 crc kubenswrapper[4869]: E0106 14:00:23.704873 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:23 crc kubenswrapper[4869]: E0106 14:00:23.705025 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.707348 4869 scope.go:117] "RemoveContainer" containerID="e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.772163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.772240 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.772257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.772286 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.772304 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:23Z","lastTransitionTime":"2026-01-06T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.875386 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.875461 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.875487 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.875528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.875553 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:23Z","lastTransitionTime":"2026-01-06T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.978436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.978498 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.978513 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.978535 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:23 crc kubenswrapper[4869]: I0106 14:00:23.978550 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:23Z","lastTransitionTime":"2026-01-06T14:00:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.073119 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.075166 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d"} Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.075572 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.084577 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.084619 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.084630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.084646 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.084657 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:24Z","lastTransitionTime":"2026-01-06T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.095836 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:24Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.108741 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:24Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.125365 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:24Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.141959 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:24Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.153733 4869 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:24Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.181153 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:24Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.187632 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.187694 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.187706 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.187727 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.187741 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:24Z","lastTransitionTime":"2026-01-06T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.198207 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:24Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.213710 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135cdf06b4dab396dd133be2b922d563745a0bfd2fc9dce55e2cdbb2a3447ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3d2c1a91a8a2b3549c9a11e1424037b15b51e7701062eb7e95dff4dfb5cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-64qxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-06T14:00:24Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.230197 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:24Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.245503 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:24Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.259433 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:24Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.273230 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b86d961d-74c0-40cb-912d-ae0db79d97f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mmdq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:24Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.283193 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:24Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.293878 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.294197 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.294305 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.294372 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.294438 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:24Z","lastTransitionTime":"2026-01-06T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.309970 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b810666160b15b302045047eba5951adf2abd173a82fe51f769af08ecfafbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b810666160b15b302045047eba5951adf2abd173a82fe51f769af08ecfafbce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"eflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.675955 6306 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.676162 6306 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0106 14:00:14.676256 6306 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.675654 6306 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0106 14:00:14.676745 6306 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0106 14:00:14.676792 6306 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0106 14:00:14.676804 6306 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0106 14:00:14.676841 6306 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:14.676853 6306 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:14.676861 6306 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:14.677837 6306 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-2f9tq_openshift-ovn-kubernetes(487c527a-7d89-4175-8827-c8cdd6e0211f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:24Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.329872 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d
\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:24Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.397353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.397401 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.397414 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.397436 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.397449 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:24Z","lastTransitionTime":"2026-01-06T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.500831 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.500893 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.500907 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.500931 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.500949 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:24Z","lastTransitionTime":"2026-01-06T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.604647 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.604761 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.604787 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.604817 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.604840 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:24Z","lastTransitionTime":"2026-01-06T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.704219 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:24 crc kubenswrapper[4869]: E0106 14:00:24.704529 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.707791 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.707860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.707885 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.707914 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.707936 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:24Z","lastTransitionTime":"2026-01-06T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.810510 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.810552 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.810563 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.810580 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:24 crc kubenswrapper[4869]: I0106 14:00:24.810593 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:24Z","lastTransitionTime":"2026-01-06T14:00:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[Same status sequence repeats at ~100 ms intervals from 14:00:24.913 through 14:00:25.632.]
Jan 06 14:00:25 crc kubenswrapper[4869]: I0106 14:00:25.703890 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:00:25 crc kubenswrapper[4869]: E0106 14:00:25.704074 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:00:25 crc kubenswrapper[4869]: I0106 14:00:25.706807 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:00:25 crc kubenswrapper[4869]: I0106 14:00:25.706887 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:00:25 crc kubenswrapper[4869]: E0106 14:00:25.706983 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:00:25 crc kubenswrapper[4869]: E0106 14:00:25.707179 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[Same status sequence repeats at ~100 ms intervals from 14:00:25.735 through 14:00:26.464.]
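Every "Node became not ready" entry in this stretch carries the identical Ready condition payload; only the heartbeat timestamps advance. The same condition can be read back from the API server while the loop is running. A minimal client-go sketch, under these assumptions: client-go is available, a kubeconfig exists at the default home location, and the node name crc is taken from the log.

    // readycheck.go: print the Ready condition the kubelet is setting above.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                // Mirrors the condition={...} payload in the setters.go lines.
                fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
            }
        }
    }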
[Same status sequence repeats at 14:00:26.566 and 14:00:26.671.]
Jan 06 14:00:26 crc kubenswrapper[4869]: I0106 14:00:26.704236 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:00:26 crc kubenswrapper[4869]: E0106 14:00:26.704495 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:00:26 crc kubenswrapper[4869]: I0106 14:00:26.706780 4869 scope.go:117] "RemoveContainer" containerID="5b810666160b15b302045047eba5951adf2abd173a82fe51f769af08ecfafbce"
[Same status sequence repeats at 14:00:26.774, 14:00:26.878 and 14:00:26.982.]
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.085549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.085605 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.085622 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.085643 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.085657 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:27Z","lastTransitionTime":"2026-01-06T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
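From here on, every status patch the kubelet sends is rejected while calling the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743: its serving certificate expired 2025-08-24T17:21:41Z, while the node clock reads 2026-01-06, which is consistent with a cluster image whose certificates were minted well before the current date. A minimal Go sketch to confirm the certificate window from the node; the endpoint is taken from the log, and InsecureSkipVerify is set on purpose because the point is to read the dates, not to trust the connection.

    // certcheck.go: dial the webhook and print its serving-cert validity window.
    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("subject=%s\nnotBefore=%s\nnotAfter=%s\nexpired=%v\n",
            cert.Subject,
            cert.NotBefore.Format(time.RFC3339),
            cert.NotAfter.Format(time.RFC3339),
            time.Now().After(cert.NotAfter))
    }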
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.090476 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2f9tq_487c527a-7d89-4175-8827-c8cdd6e0211f/ovnkube-controller/1.log"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.093650 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerStarted","Data":"15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b"}
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.094396 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.111937 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:27Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.131600 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:27Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.147650 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:27Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.171563 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:27Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.188336 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.188385 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.188395 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.188412 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.188423 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:27Z","lastTransitionTime":"2026-01-06T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.197155 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:27Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.212248 4869 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135cdf06b4dab396dd133be2b922d563745a0bfd2fc9dce55e2cdbb2a3447ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3d2c1a91a8a2b3549c9a11e1424037b15b51e7701062eb7e95dff4dfb5cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-64qxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-06T14:00:27Z is after 2025-08-24T17:21:41Z"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.230184 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:27Z is after 2025-08-24T17:21:41Z"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.244721 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:27Z is after 2025-08-24T17:21:41Z"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.256714 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:27Z is after 2025-08-24T17:21:41Z"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.267555 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b86d961d-74c0-40cb-912d-ae0db79d97f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mmdq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:27Z is after 2025-08-24T17:21:41Z"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.286457 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:27Z is after 2025-08-24T17:21:41Z"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.291249 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.291285 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.291294 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.291312 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.291323 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:27Z","lastTransitionTime":"2026-01-06T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.308043 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:27Z is after 2025-08-24T17:21:41Z"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.322238 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:27Z is after 2025-08-24T17:21:41Z"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.334195 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:27Z is after 2025-08-24T17:21:41Z"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.356208 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b810666160b15b302045047eba5951adf2abd173a82fe51f769af08ecfafbce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"eflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.675955 6306 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.676162 6306 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0106 14:00:14.676256 6306 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.675654 6306 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0106 14:00:14.676745 6306 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0106 14:00:14.676792 6306 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0106 14:00:14.676804 6306 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0106 14:00:14.676841 6306 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:14.676853 6306 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:14.676861 6306 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:14.677837 6306 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:27Z is after 2025-08-24T17:21:41Z"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.393898 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.393944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.393954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.393968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.393978 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:27Z","lastTransitionTime":"2026-01-06T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.497215 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.497255 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.497264 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.497278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.497289 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:27Z","lastTransitionTime":"2026-01-06T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.600863 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.600922 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.600934 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.600954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.600965 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:27Z","lastTransitionTime":"2026-01-06T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.703646 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.703731 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.703642 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:00:27 crc kubenswrapper[4869]: E0106 14:00:27.703881 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:00:27 crc kubenswrapper[4869]: E0106 14:00:27.703796 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:00:27 crc kubenswrapper[4869]: E0106 14:00:27.704087 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.704397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.704449 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.704461 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.704479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.704493 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:27Z","lastTransitionTime":"2026-01-06T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.807276 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.807361 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.807385 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.807416 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.807439 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:27Z","lastTransitionTime":"2026-01-06T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.910248 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.910320 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.910344 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.910374 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:27 crc kubenswrapper[4869]: I0106 14:00:27.910400 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:27Z","lastTransitionTime":"2026-01-06T14:00:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.013966 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.014032 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.014044 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.014061 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.014080 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:28Z","lastTransitionTime":"2026-01-06T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.100060 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2f9tq_487c527a-7d89-4175-8827-c8cdd6e0211f/ovnkube-controller/2.log"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.100847 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2f9tq_487c527a-7d89-4175-8827-c8cdd6e0211f/ovnkube-controller/1.log"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.104070 4869 generic.go:334] "Generic (PLEG): container finished" podID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerID="15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b" exitCode=1
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.104141 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerDied","Data":"15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b"}
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.104199 4869 scope.go:117] "RemoveContainer" containerID="5b810666160b15b302045047eba5951adf2abd173a82fe51f769af08ecfafbce"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.105460 4869 scope.go:117] "RemoveContainer" containerID="15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b"
Jan 06 14:00:28 crc kubenswrapper[4869]: E0106 14:00:28.105744 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2f9tq_openshift-ovn-kubernetes(487c527a-7d89-4175-8827-c8cdd6e0211f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.117928 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.118218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.118318 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.118408 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.118498 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:28Z","lastTransitionTime":"2026-01-06T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.125062 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.157631 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b810666160b15b302045047eba5951adf2abd173a82fe51f769af08ecfafbce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"eflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.675955 6306 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.676162 6306 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0106 14:00:14.676256 6306 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.675654 6306 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0106 14:00:14.676745 6306 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0106 14:00:14.676792 6306 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0106 14:00:14.676804 6306 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0106 14:00:14.676841 6306 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:14.676853 6306 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:14.676861 6306 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:14.677837 6306 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:27Z\\\",\\\"message\\\":\\\"or removal\\\\nI0106 14:00:27.642050 6512 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0106 14:00:27.642074 6512 handler.go:190] Sending *v1.Namespace 
event handler 1 for removal\\\\nI0106 14:00:27.642078 6512 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0106 14:00:27.642101 6512 factory.go:656] Stopping watch factory\\\\nI0106 14:00:27.642118 6512 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0106 14:00:27.642161 6512 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0106 14:00:27.642174 6512 handler.go:208] Removed *v1.Node event handler 7\\\\nI0106 14:00:27.642180 6512 handler.go:208] Removed *v1.Node event handler 2\\\\nI0106 14:00:27.642187 6512 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:27.642197 6512 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:27.642203 6512 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0106 14:00:27.642209 6512 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:27.642215 6512 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0106 14:00:27.642391 6512 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:27.642440 6512 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.181332 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.207404 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.222466 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.222537 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.222549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.222786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.222801 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:28Z","lastTransitionTime":"2026-01-06T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.229145 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.245924 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.247518 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.265921 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.270828 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.288211 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.306127 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.324254 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.325345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.325411 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:28 crc 
kubenswrapper[4869]: I0106 14:00:28.325428 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.325454 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.325471 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:28Z","lastTransitionTime":"2026-01-06T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.340908 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135cdf06b4dab396dd133be2b922d563745a0bfd2fc9dce55e2cdbb2a3447ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3d2c1a91a8a2b3549c9a11e1424037b15b51e7701062eb7e95dff4dfb5cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:0
0:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-64qxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.358338 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.379014 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.394314 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.410724 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b86d961d-74c0-40cb-912d-ae0db79d97f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mmdq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc 
kubenswrapper[4869]: I0106 14:00:28.428499 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.429160 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.429324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.429392 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.429463 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.429546 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:28Z","lastTransitionTime":"2026-01-06T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.459961 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b810666160b15b302045047eba5951adf2abd173a82fe51f769af08ecfafbce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"eflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.675955 6306 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.676162 6306 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0106 14:00:14.676256 6306 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:14.675654 6306 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0106 14:00:14.676745 6306 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0106 14:00:14.676792 6306 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0106 14:00:14.676804 6306 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0106 14:00:14.676841 6306 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:14.676853 6306 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:14.676861 6306 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:14.677837 6306 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:27Z\\\",\\\"message\\\":\\\"or removal\\\\nI0106 14:00:27.642050 6512 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0106 14:00:27.642074 6512 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0106 14:00:27.642078 6512 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0106 14:00:27.642101 6512 factory.go:656] Stopping watch factory\\\\nI0106 14:00:27.642118 6512 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0106 14:00:27.642161 6512 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0106 14:00:27.642174 6512 handler.go:208] Removed *v1.Node event handler 7\\\\nI0106 14:00:27.642180 6512 handler.go:208] Removed *v1.Node event handler 2\\\\nI0106 14:00:27.642187 6512 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:27.642197 6512 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:27.642203 6512 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0106 14:00:27.642209 6512 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:27.642215 6512 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0106 14:00:27.642391 6512 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:27.642440 6512 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.478525 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.497076 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdc25c94-5921-41e8-99dc-fe1805225287\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69058b488c453bb2e06695939568f0297a970aff932569db85da433feb5814d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://435bff2936635a82afe7ca4597f37b18da009622047b4c6f0908d2562fbf9067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d94b86e136d1d14bac701960114e85125092e2d511e21bbec0a9b0f43e29989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.512423 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.531161 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.532647 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.532823 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.532924 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.533007 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.533083 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:28Z","lastTransitionTime":"2026-01-06T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.547862 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.564880 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.583533 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 
14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube
-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.596570 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.618984 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.635259 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135cdf06b4dab396dd133be2b922d563745a0bfd2fc9dce55e2cdbb2a3447ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3d2c1a91a8a2b3549c9a11e1424037b15b51e7701062eb7e95dff4dfb5cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-64qxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.637062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.637123 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.637142 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.637168 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.637187 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:28Z","lastTransitionTime":"2026-01-06T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.655780 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.670617 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.681278 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.696288 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b86d961d-74c0-40cb-912d-ae0db79d97f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mmdq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:28Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.703599 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:00:28 crc kubenswrapper[4869]: E0106 14:00:28.703953 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.745807 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.745872 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.745884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.745908 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.745921 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:28Z","lastTransitionTime":"2026-01-06T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.849289 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.849390 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.849403 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.849419 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.849851 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:28Z","lastTransitionTime":"2026-01-06T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.953793 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.953847 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.953858 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.953880 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:28 crc kubenswrapper[4869]: I0106 14:00:28.953895 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:28Z","lastTransitionTime":"2026-01-06T14:00:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.057121 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.057184 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.057201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.057226 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.057244 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:29Z","lastTransitionTime":"2026-01-06T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.108988 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2f9tq_487c527a-7d89-4175-8827-c8cdd6e0211f/ovnkube-controller/2.log"
Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.115199 4869 scope.go:117] "RemoveContainer" containerID="15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b"
Jan 06 14:00:29 crc kubenswrapper[4869]: E0106 14:00:29.115546 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2f9tq_openshift-ovn-kubernetes(487c527a-7d89-4175-8827-c8cdd6e0211f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f"
Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.133877 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:29Z is after 2025-08-24T17:21:41Z"
Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.160721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.160806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.160832 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.160869 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.160896 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:29Z","lastTransitionTime":"2026-01-06T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.171729 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9
442f59c5875c8b73096c561b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:27Z\\\",\\\"message\\\":\\\"or removal\\\\nI0106 14:00:27.642050 6512 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0106 14:00:27.642074 6512 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0106 14:00:27.642078 6512 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0106 14:00:27.642101 6512 factory.go:656] Stopping watch factory\\\\nI0106 14:00:27.642118 6512 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0106 14:00:27.642161 6512 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0106 14:00:27.642174 6512 handler.go:208] Removed *v1.Node event handler 7\\\\nI0106 14:00:27.642180 6512 handler.go:208] Removed *v1.Node event handler 2\\\\nI0106 14:00:27.642187 6512 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:27.642197 6512 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:27.642203 6512 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0106 14:00:27.642209 6512 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:27.642215 6512 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0106 14:00:27.642391 6512 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:27.642440 6512 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2f9tq_openshift-ovn-kubernetes(487c527a-7d89-4175-8827-c8cdd6e0211f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:29Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.189744 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:29Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.204872 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdc25c94-5921-41e8-99dc-fe1805225287\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69058b488c453bb2e06695939568f0297a970aff932569db85da433feb5814d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://435bff2936635a82afe7ca4597f37b18da009622047b4c6f0908d2562fbf9067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d94b86e136d1d14bac701960114e85125092e2d511e21bbec0a9b0f43e29989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:29Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.224526 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:29Z is after 
2025-08-24T17:21:41Z" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.242056 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:29Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.260656 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:29Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.264607 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.264680 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.264697 4869 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.264719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.264735 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:29Z","lastTransitionTime":"2026-01-06T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.281993 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:29Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.307460 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:29Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.329948 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:29Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.349522 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:29Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.363786 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135cdf06b4dab396dd133be2b922d563745a0bfd2fc9dce55e2cdbb2a3447ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3d2c1a91a8a2b3549c9a11e1424037b15b51e7701062eb7e95dff4dfb5cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-64qxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:29Z is after 2025-08-24T17:21:41Z" Jan 06 
14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.368707 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.368774 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.368789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.368817 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.368834 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:29Z","lastTransitionTime":"2026-01-06T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.381389 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:29Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.394184 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:29Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.408378 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:29Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.425778 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b86d961d-74c0-40cb-912d-ae0db79d97f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mmdq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:29Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:29 crc 
kubenswrapper[4869]: I0106 14:00:29.473078 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.473137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.473151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.473170 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.473182 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:29Z","lastTransitionTime":"2026-01-06T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.577007 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.577060 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.577072 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.577089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.577103 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:29Z","lastTransitionTime":"2026-01-06T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.680025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.680093 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.680110 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.680133 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.680150 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:29Z","lastTransitionTime":"2026-01-06T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.704019 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.704204 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.704375 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:29 crc kubenswrapper[4869]: E0106 14:00:29.704351 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:29 crc kubenswrapper[4869]: E0106 14:00:29.704646 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:29 crc kubenswrapper[4869]: E0106 14:00:29.705145 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.782921 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.782968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.782977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.782994 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.783006 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:29Z","lastTransitionTime":"2026-01-06T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.885891 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.885981 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.885995 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.886016 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.886030 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:29Z","lastTransitionTime":"2026-01-06T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.990209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.990310 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.990323 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.990345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:29 crc kubenswrapper[4869]: I0106 14:00:29.990372 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:29Z","lastTransitionTime":"2026-01-06T14:00:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.093721 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.094123 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.094204 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.094306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.094400 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:30Z","lastTransitionTime":"2026-01-06T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.197870 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.197921 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.197930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.197951 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.197964 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:30Z","lastTransitionTime":"2026-01-06T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.301476 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.301531 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.301540 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.301557 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.301568 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:30Z","lastTransitionTime":"2026-01-06T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.404522 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.404632 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.404704 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.404746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.404774 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:30Z","lastTransitionTime":"2026-01-06T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.472998 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs\") pod \"network-metrics-daemon-mmdq4\" (UID: \"b86d961d-74c0-40cb-912d-ae0db79d97f2\") " pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:30 crc kubenswrapper[4869]: E0106 14:00:30.473354 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 06 14:00:30 crc kubenswrapper[4869]: E0106 14:00:30.473543 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs podName:b86d961d-74c0-40cb-912d-ae0db79d97f2 nodeName:}" failed. No retries permitted until 2026-01-06 14:00:46.473501223 +0000 UTC m=+65.013188927 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs") pod "network-metrics-daemon-mmdq4" (UID: "b86d961d-74c0-40cb-912d-ae0db79d97f2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.507596 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.507729 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.507739 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.507784 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.507796 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:30Z","lastTransitionTime":"2026-01-06T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.609988 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.610034 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.610047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.610064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.610076 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:30Z","lastTransitionTime":"2026-01-06T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.703364 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:30 crc kubenswrapper[4869]: E0106 14:00:30.703509 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.713503 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.713651 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.713713 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.713828 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.713854 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:30Z","lastTransitionTime":"2026-01-06T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.817438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.817514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.817535 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.817559 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.817579 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:30Z","lastTransitionTime":"2026-01-06T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.920999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.921045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.921089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.921112 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:30 crc kubenswrapper[4869]: I0106 14:00:30.921127 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:30Z","lastTransitionTime":"2026-01-06T14:00:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.023937 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.023996 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.024013 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.024037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.024055 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:31Z","lastTransitionTime":"2026-01-06T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.126362 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.126397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.126406 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.126420 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.126430 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:31Z","lastTransitionTime":"2026-01-06T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.229602 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.229697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.229711 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.229734 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.229748 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:31Z","lastTransitionTime":"2026-01-06T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.332993 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.333064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.333076 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.333096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.333111 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:31Z","lastTransitionTime":"2026-01-06T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.436488 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.436536 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.436546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.436590 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.436616 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:31Z","lastTransitionTime":"2026-01-06T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.539435 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.539500 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.539519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.539551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.539572 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:31Z","lastTransitionTime":"2026-01-06T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.642728 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.642789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.642803 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.642820 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.642839 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:31Z","lastTransitionTime":"2026-01-06T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.703966 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:31 crc kubenswrapper[4869]: E0106 14:00:31.704091 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.704304 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.704363 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:31 crc kubenswrapper[4869]: E0106 14:00:31.704557 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:31 crc kubenswrapper[4869]: E0106 14:00:31.704722 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.721432 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:31Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.745888 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.745955 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.745969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.745988 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.746000 4869 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:31Z","lastTransitionTime":"2026-01-06T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.755373 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\
\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:27Z\\\",\\\"message\\\":\\\"or removal\\\\nI0106 14:00:27.642050 6512 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0106 14:00:27.642074 6512 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0106 14:00:27.642078 6512 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0106 14:00:27.642101 6512 factory.go:656] Stopping watch factory\\\\nI0106 14:00:27.642118 6512 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0106 14:00:27.642161 6512 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0106 14:00:27.642174 6512 handler.go:208] Removed *v1.Node event handler 7\\\\nI0106 14:00:27.642180 6512 handler.go:208] Removed *v1.Node event handler 2\\\\nI0106 14:00:27.642187 6512 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:27.642197 6512 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:27.642203 6512 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0106 14:00:27.642209 6512 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:27.642215 6512 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0106 14:00:27.642391 6512 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:27.642440 6512 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2f9tq_openshift-ovn-kubernetes(487c527a-7d89-4175-8827-c8cdd6e0211f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:31Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.777430 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:31Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.796655 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:31Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.814199 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdc25c94-5921-41e8-99dc-fe1805225287\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69058b488c453bb2e06695939568f0297a970aff932569db85da433feb5814d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://435bff2936635a82afe7ca4597f37b18da009622047b4c6f0908d2562fbf9067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d94b86e136d1d14bac701960114e85125092e2d511e21bbec0a9b0f43e29989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:31Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.832223 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:31Z is after 
2025-08-24T17:21:41Z" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.848957 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.849018 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.849034 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.849056 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.849071 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:31Z","lastTransitionTime":"2026-01-06T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.849062 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:31Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.892352 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:31Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.916749 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135cdf06b4dab396dd133be2b922d563745a0bfd2fc9dce55e2cdbb2a3447ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3d2c1a91a8a2b3549c9a11e1424037b15b51e7701062eb7e95dff4dfb5cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/e
nv\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-64qxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:31Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.947823 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 
14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube
-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:31Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.951221 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.951249 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.951257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.951271 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.951282 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:31Z","lastTransitionTime":"2026-01-06T14:00:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.965476 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:31Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.983128 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:31Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:31 crc kubenswrapper[4869]: I0106 14:00:31.994026 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b86d961d-74c0-40cb-912d-ae0db79d97f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mmdq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:31Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.004994 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:32Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.016505 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:32Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.026278 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:32Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.053546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.053602 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.053614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.053635 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.053650 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:32Z","lastTransitionTime":"2026-01-06T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.156206 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.156246 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.156254 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.156271 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.156281 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:32Z","lastTransitionTime":"2026-01-06T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.258943 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.258994 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.259005 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.259023 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.259036 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:32Z","lastTransitionTime":"2026-01-06T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.361278 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.361328 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.361340 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.361356 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.361365 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:32Z","lastTransitionTime":"2026-01-06T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.464770 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.464842 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.464866 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.464897 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.464917 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:32Z","lastTransitionTime":"2026-01-06T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.567694 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.567741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.567774 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.567788 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.567799 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:32Z","lastTransitionTime":"2026-01-06T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.671034 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.671116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.671132 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.671153 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.671167 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:32Z","lastTransitionTime":"2026-01-06T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.704566 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:32 crc kubenswrapper[4869]: E0106 14:00:32.704929 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.774111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.774946 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.775079 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.775210 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.775300 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:32Z","lastTransitionTime":"2026-01-06T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.878800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.878857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.878875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.878901 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.878924 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:32Z","lastTransitionTime":"2026-01-06T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.981622 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.981715 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.981734 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.981755 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:32 crc kubenswrapper[4869]: I0106 14:00:32.981775 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:32Z","lastTransitionTime":"2026-01-06T14:00:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.084361 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.084394 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.084403 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.084433 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.084444 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:33Z","lastTransitionTime":"2026-01-06T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.188028 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.188350 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.188516 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.188638 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.188960 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:33Z","lastTransitionTime":"2026-01-06T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.292486 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.292539 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.292558 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.292581 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.292600 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:33Z","lastTransitionTime":"2026-01-06T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.395427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.395489 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.395505 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.395523 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.395536 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:33Z","lastTransitionTime":"2026-01-06T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.498406 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.498447 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.498458 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.498473 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.498487 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:33Z","lastTransitionTime":"2026-01-06T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.510079 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.510187 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.510225 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.510235 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.510335 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-06 14:01:05.510307143 +0000 UTC m=+84.049994807 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.510365 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.510394 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.510392 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.510412 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.510531 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-06 14:01:05.510506307 +0000 UTC m=+84.050194011 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.510548 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.510570 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.510582 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.510625 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-06 14:01:05.510613649 +0000 UTC m=+84.050301323 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.510658 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.510730 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-06 14:01:05.510712241 +0000 UTC m=+84.050400015 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.610521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.610572 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.610583 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.610599 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.610610 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:33Z","lastTransitionTime":"2026-01-06T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.611992 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.612273 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:01:05.612251202 +0000 UTC m=+84.151938876 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.657946 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.657992 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.658005 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.658029 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.658040 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:33Z","lastTransitionTime":"2026-01-06T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.676405 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:33Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.681141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.681313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.681371 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.681460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.681520 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:33Z","lastTransitionTime":"2026-01-06T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.698561 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:33Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.703931 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.703945 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.704151 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.704319 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.704457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.704496 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.704581 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.704582 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.704603 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.704630 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:33Z","lastTransitionTime":"2026-01-06T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.704487 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.728272 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:33Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.733692 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.733728 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.733740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.733758 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.733770 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:33Z","lastTransitionTime":"2026-01-06T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.750640 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:33Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.755022 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.755075 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
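Two distinct failures are interleaved above. The first is that the node keeps flapping to KubeletNotReady because nothing has yet written a CNI network config into /etc/kubernetes/cni/net.d/, so every pod sandbox creation and node sync fails with NetworkPluginNotReady. Below is a minimal sketch for confirming the empty directory on the node itself; the path is taken straight from the log, while the script and the assumed *.conf/*.conflist naming are illustrative only, not part of the log:

    #!/usr/bin/env python3
    # Hypothetical on-node check: list CNI network configs in the directory the
    # kubelet errors name. An empty listing matches NetworkPluginNotReady above.
    from pathlib import Path

    CNI_DIR = Path("/etc/kubernetes/cni/net.d")  # path copied from the kubelet errors

    if not CNI_DIR.is_dir():
        print(f"{CNI_DIR} is missing entirely")
    else:
        confs = sorted(p for p in CNI_DIR.iterdir() if p.suffix in {".conf", ".conflist", ".json"})
        if not confs:
            print(f"no CNI config in {CNI_DIR} -- consistent with NetworkPluginNotReady")
        for p in confs:
            print(p.name, p.stat().st_size, "bytes")

Once the network provider (OVN-Kubernetes here, going by the network-node-identity webhook named in the errors above) writes its config into that directory, the Ready condition should clear on the next sync.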
event="NodeHasNoDiskPressure" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.755088 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.755108 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.755121 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:33Z","lastTransitionTime":"2026-01-06T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.770816 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:33Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:33 crc kubenswrapper[4869]: E0106 14:00:33.771015 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.773494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
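The second failure explains why the status never reaches the API server: after the retries above, the kubelet gives up ("update node status exceeds retry count") because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-06, so the API server cannot verify the webhook's certificate and rejects every patch with an internal error. Below is a minimal sketch for confirming the certificate window from the node; it assumes the endpoint accepts a plain TLS handshake and that the third-party cryptography package is installed, neither of which the log itself guarantees:

    #!/usr/bin/env python3
    # Hypothetical on-node check: fetch the webhook's serving certificate without
    # verification (an expired certificate is still returned) and compare its
    # validity window against the local clock.
    import ssl
    from datetime import datetime, timezone

    from cryptography import x509  # third-party dependency; an assumption here

    pem = ssl.get_server_certificate(("127.0.0.1", 9743))  # endpoint from the log
    cert = x509.load_pem_x509_certificate(pem.encode("ascii"))

    now = datetime.now(timezone.utc).replace(tzinfo=None)  # cert times are naive UTC
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)
    print("expired:", now > cert.not_valid_after)  # True here: 2026-01-06 is after 2025-08-24

On CRC this pattern is typical of a cluster resumed long after its certificates lapsed; rotation normally repairs it once the control plane comes up, and the sketch only confirms the diagnosis rather than fixing anything.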
event="NodeHasSufficientMemory" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.773555 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.773571 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.773624 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.773639 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:33Z","lastTransitionTime":"2026-01-06T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.876690 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.877010 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.877109 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.877207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.877284 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:33Z","lastTransitionTime":"2026-01-06T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.980740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.981056 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.981137 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.981208 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:33 crc kubenswrapper[4869]: I0106 14:00:33.981269 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:33Z","lastTransitionTime":"2026-01-06T14:00:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.084760 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.084815 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.084827 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.084850 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.084867 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:34Z","lastTransitionTime":"2026-01-06T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.187957 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.188000 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.188013 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.188030 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.188043 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:34Z","lastTransitionTime":"2026-01-06T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.291681 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.291745 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.291766 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.291787 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.291802 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:34Z","lastTransitionTime":"2026-01-06T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.395021 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.395069 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.395080 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.395099 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.395110 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:34Z","lastTransitionTime":"2026-01-06T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.498003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.498038 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.498046 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.498058 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.498068 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:34Z","lastTransitionTime":"2026-01-06T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.600401 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.600440 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.600453 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.600470 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.600480 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:34Z","lastTransitionTime":"2026-01-06T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.702810 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.702846 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.702855 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.702869 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.702879 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:34Z","lastTransitionTime":"2026-01-06T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.703383 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:34 crc kubenswrapper[4869]: E0106 14:00:34.703500 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.804858 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.804889 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.804897 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.804909 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.804919 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:34Z","lastTransitionTime":"2026-01-06T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.906937 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.906974 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.906983 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.906995 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:34 crc kubenswrapper[4869]: I0106 14:00:34.907003 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:34Z","lastTransitionTime":"2026-01-06T14:00:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:00:35 crc kubenswrapper[4869]: I0106 14:00:35.703410 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:00:35 crc kubenswrapper[4869]: I0106 14:00:35.703496 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:00:35 crc kubenswrapper[4869]: E0106 14:00:35.703639 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:00:35 crc kubenswrapper[4869]: I0106 14:00:35.703713 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:00:35 crc kubenswrapper[4869]: E0106 14:00:35.703855 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:00:35 crc kubenswrapper[4869]: E0106 14:00:35.704034 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:00:36 crc kubenswrapper[4869]: I0106 14:00:36.703526 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:00:36 crc kubenswrapper[4869]: E0106 14:00:36.703684 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:00:36 crc kubenswrapper[4869]: I0106 14:00:36.863974 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:00:36 crc kubenswrapper[4869]: I0106 14:00:36.864013 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:00:36 crc kubenswrapper[4869]: I0106 14:00:36.864024 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:00:36 crc kubenswrapper[4869]: I0106 14:00:36.864039 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:00:36 crc kubenswrapper[4869]: I0106 14:00:36.864052 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:36Z","lastTransitionTime":"2026-01-06T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:36 crc kubenswrapper[4869]: I0106 14:00:36.966855 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:36 crc kubenswrapper[4869]: I0106 14:00:36.966959 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:36 crc kubenswrapper[4869]: I0106 14:00:36.966979 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:36 crc kubenswrapper[4869]: I0106 14:00:36.967006 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:36 crc kubenswrapper[4869]: I0106 14:00:36.967026 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:36Z","lastTransitionTime":"2026-01-06T14:00:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.023481 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.040959 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135cdf06b4dab396dd133be2b922d563745a0bfd2fc9dce55e2cdbb2a3447ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3d2c1a91a8a2b3549c9a11e1424037b15b51e7701062eb7e95dff4dfb5cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-64qxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:37Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.063384 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:37Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.069821 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.069886 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.069912 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.069944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.069968 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:37Z","lastTransitionTime":"2026-01-06T14:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.084118 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:37Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.109016 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:37Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.123702 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b86d961d-74c0-40cb-912d-ae0db79d97f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mmdq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:37Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.139592 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:37Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.159153 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:37Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.189014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.189069 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.189177 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.189197 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.189209 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:37Z","lastTransitionTime":"2026-01-06T14:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.200438 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:37Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.211555 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:37Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.229788 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:27Z\\\",\\\"message\\\":\\\"or removal\\\\nI0106 14:00:27.642050 6512 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0106 14:00:27.642074 6512 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0106 14:00:27.642078 6512 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0106 14:00:27.642101 6512 factory.go:656] Stopping watch factory\\\\nI0106 14:00:27.642118 6512 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0106 14:00:27.642161 6512 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0106 14:00:27.642174 6512 handler.go:208] Removed *v1.Node event handler 7\\\\nI0106 14:00:27.642180 6512 handler.go:208] Removed *v1.Node event handler 2\\\\nI0106 14:00:27.642187 6512 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:27.642197 6512 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:27.642203 6512 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0106 14:00:27.642209 6512 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:27.642215 6512 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0106 14:00:27.642391 6512 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:27.642440 6512 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2f9tq_openshift-ovn-kubernetes(487c527a-7d89-4175-8827-c8cdd6e0211f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:37Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.244147 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:37Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.255022 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:37Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.265778 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdc25c94-5921-41e8-99dc-fe1805225287\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69058b488c453bb2e06695939568f0297a970aff932569db85da433feb5814d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://435bff2936635a82afe7ca4597f37b18da009622047b4c6f0908d2562fbf9067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d94b86e136d1d14bac701960114e85125092e2d511e21bbec0a9b0f43e29989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:37Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.279933 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:37Z is after 
2025-08-24T17:21:41Z" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.291403 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.291593 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.291719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.291820 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.291898 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:37Z","lastTransitionTime":"2026-01-06T14:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.295845 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:37Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.308531 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:37Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.394921 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.394998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.395013 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.395037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.395052 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:37Z","lastTransitionTime":"2026-01-06T14:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.498394 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.498878 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.499020 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.499176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.499296 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:37Z","lastTransitionTime":"2026-01-06T14:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.603051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.603111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.603122 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.603141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.603155 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:37Z","lastTransitionTime":"2026-01-06T14:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.703680 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.703729 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:37 crc kubenswrapper[4869]: E0106 14:00:37.703823 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.703959 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:37 crc kubenswrapper[4869]: E0106 14:00:37.704164 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:37 crc kubenswrapper[4869]: E0106 14:00:37.704240 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.705825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.705872 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.705884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.705906 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.705922 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:37Z","lastTransitionTime":"2026-01-06T14:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.808655 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.808715 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.808725 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.808741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.808750 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:37Z","lastTransitionTime":"2026-01-06T14:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.911559 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.911632 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.911653 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.911710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:37 crc kubenswrapper[4869]: I0106 14:00:37.911732 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:37Z","lastTransitionTime":"2026-01-06T14:00:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.014117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.014198 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.014214 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.014236 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.014252 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:38Z","lastTransitionTime":"2026-01-06T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.118087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.118168 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.118189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.118218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.118241 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:38Z","lastTransitionTime":"2026-01-06T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.221855 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.221923 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.221936 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.221962 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.221982 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:38Z","lastTransitionTime":"2026-01-06T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.325351 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.325410 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.325433 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.325461 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.325480 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:38Z","lastTransitionTime":"2026-01-06T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.428894 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.428972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.428996 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.429029 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.429056 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:38Z","lastTransitionTime":"2026-01-06T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.533065 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.533133 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.533146 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.533169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.533184 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:38Z","lastTransitionTime":"2026-01-06T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.636979 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.637068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.637086 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.637153 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.637175 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:38Z","lastTransitionTime":"2026-01-06T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.704392 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:38 crc kubenswrapper[4869]: E0106 14:00:38.704578 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.740625 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.740687 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.740697 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.740710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.740720 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:38Z","lastTransitionTime":"2026-01-06T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.844506 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.844561 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.844571 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.844586 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.844598 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:38Z","lastTransitionTime":"2026-01-06T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.947808 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.947895 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.947920 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.947956 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:38 crc kubenswrapper[4869]: I0106 14:00:38.947993 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:38Z","lastTransitionTime":"2026-01-06T14:00:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.051433 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.051509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.051530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.051561 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.051582 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:39Z","lastTransitionTime":"2026-01-06T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.155129 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.155211 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.155238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.155271 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.155296 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:39Z","lastTransitionTime":"2026-01-06T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.258492 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.258573 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.258595 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.258624 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.258642 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:39Z","lastTransitionTime":"2026-01-06T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.361470 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.361546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.361582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.361603 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.361613 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:39Z","lastTransitionTime":"2026-01-06T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.464919 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.464980 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.465003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.465036 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.465059 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:39Z","lastTransitionTime":"2026-01-06T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.569212 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.569282 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.569303 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.569326 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.569342 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:39Z","lastTransitionTime":"2026-01-06T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.672055 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.672108 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.672125 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.672149 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.672167 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:39Z","lastTransitionTime":"2026-01-06T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.703807 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.704032 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.704253 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:39 crc kubenswrapper[4869]: E0106 14:00:39.704245 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:39 crc kubenswrapper[4869]: E0106 14:00:39.704482 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:39 crc kubenswrapper[4869]: E0106 14:00:39.704549 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.775716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.775791 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.775815 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.775866 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.775896 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:39Z","lastTransitionTime":"2026-01-06T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.879110 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.879177 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.879195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.879226 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.879246 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:39Z","lastTransitionTime":"2026-01-06T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.983465 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.983536 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.983555 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.983581 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:39 crc kubenswrapper[4869]: I0106 14:00:39.983598 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:39Z","lastTransitionTime":"2026-01-06T14:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.087450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.087502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.087513 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.087532 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.087546 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:40Z","lastTransitionTime":"2026-01-06T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.190326 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.190397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.190415 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.190442 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.190462 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:40Z","lastTransitionTime":"2026-01-06T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.293461 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.293570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.293595 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.293628 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.293649 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:40Z","lastTransitionTime":"2026-01-06T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.397294 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.397370 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.397391 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.397423 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.397444 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:40Z","lastTransitionTime":"2026-01-06T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.501144 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.501225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.501243 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.501270 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.501290 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:40Z","lastTransitionTime":"2026-01-06T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.604740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.604810 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.604835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.604869 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.604897 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:40Z","lastTransitionTime":"2026-01-06T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.703570 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:40 crc kubenswrapper[4869]: E0106 14:00:40.703851 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.708384 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.708447 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.708475 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.708507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.708530 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:40Z","lastTransitionTime":"2026-01-06T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.812118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.812201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.812218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.812251 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.812271 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:40Z","lastTransitionTime":"2026-01-06T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.915531 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.915607 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.915627 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.915660 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:40 crc kubenswrapper[4869]: I0106 14:00:40.915707 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:40Z","lastTransitionTime":"2026-01-06T14:00:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.019193 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.019279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.019304 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.019336 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.019360 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:41Z","lastTransitionTime":"2026-01-06T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.122793 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.122871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.122890 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.122919 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.122945 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:41Z","lastTransitionTime":"2026-01-06T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.227300 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.227387 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.227426 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.227467 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.227493 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:41Z","lastTransitionTime":"2026-01-06T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.331853 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.331930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.331954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.331992 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.332021 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:41Z","lastTransitionTime":"2026-01-06T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.435944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.436024 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.436049 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.436081 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.436103 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:41Z","lastTransitionTime":"2026-01-06T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.539181 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.539348 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.539383 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.539447 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.539476 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:41Z","lastTransitionTime":"2026-01-06T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.643008 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.643066 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.643083 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.643110 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.643129 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:41Z","lastTransitionTime":"2026-01-06T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.721619 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:41 crc kubenswrapper[4869]: E0106 14:00:41.721908 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.722936 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:41 crc kubenswrapper[4869]: E0106 14:00:41.723281 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.723617 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:41 crc kubenswrapper[4869]: E0106 14:00:41.723870 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.741379 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:41Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.747155 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.747199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.747209 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.747223 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.747234 4869 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:41Z","lastTransitionTime":"2026-01-06T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.770385 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\
\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:27Z\\\",\\\"message\\\":\\\"or removal\\\\nI0106 14:00:27.642050 6512 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0106 14:00:27.642074 6512 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0106 14:00:27.642078 6512 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0106 14:00:27.642101 6512 factory.go:656] Stopping watch factory\\\\nI0106 14:00:27.642118 6512 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0106 14:00:27.642161 6512 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0106 14:00:27.642174 6512 handler.go:208] Removed *v1.Node event handler 7\\\\nI0106 14:00:27.642180 6512 handler.go:208] Removed *v1.Node event handler 2\\\\nI0106 14:00:27.642187 6512 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:27.642197 6512 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:27.642203 6512 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0106 14:00:27.642209 6512 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:27.642215 6512 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0106 14:00:27.642391 6512 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:27.642440 6512 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2f9tq_openshift-ovn-kubernetes(487c527a-7d89-4175-8827-c8cdd6e0211f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:41Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.789520 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:41Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.815749 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdc25c94-5921-41e8-99dc-fe1805225287\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69058b488c453bb2e06695939568f0297a970aff932569db85da433feb5814d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://435bff2936635a82afe7ca4597f37b18da009622047b4c6f0908d2562fbf9067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d94b86e136d1d14bac701960114e85125092e2d511e21bbec0a9b0f43e29989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:41Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.835957 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:41Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.850467 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:41Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.850573 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.850626 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.850645 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.850680 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.850752 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:41Z","lastTransitionTime":"2026-01-06T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.866315 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:41Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.878715 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:41Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.894675 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:41Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.908475 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:41Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.925690 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:41Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.946329 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135cdf06b4dab396dd133be2b922d563745a0bfd2fc9dce55e2cdbb2a3447ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3d2c1a91a8a2b3549c9a11e1424037b15b51e7701062eb7e95dff4dfb5cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-64qxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:41Z is after 2025-08-24T17:21:41Z" Jan 06 
14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.953562 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.953765 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.953836 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.953910 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.953975 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:41Z","lastTransitionTime":"2026-01-06T14:00:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.962051 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:41Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.974398 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:41Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.988111 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:41Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:41 crc kubenswrapper[4869]: I0106 14:00:41.999920 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b86d961d-74c0-40cb-912d-ae0db79d97f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mmdq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:41Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:42 crc 
kubenswrapper[4869]: I0106 14:00:42.056018 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.056161 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.056242 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.056340 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.056539 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:42Z","lastTransitionTime":"2026-01-06T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.159521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.159561 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.159573 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.159590 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.159603 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:42Z","lastTransitionTime":"2026-01-06T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.262417 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.262458 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.262467 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.262482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.262491 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:42Z","lastTransitionTime":"2026-01-06T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.365240 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.365288 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.365303 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.365323 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.365336 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:42Z","lastTransitionTime":"2026-01-06T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.468094 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.468144 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.468156 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.468173 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.468187 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:42Z","lastTransitionTime":"2026-01-06T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.571796 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.571850 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.571875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.571896 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.571910 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:42Z","lastTransitionTime":"2026-01-06T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.675165 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.675244 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.675266 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.675293 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.675311 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:42Z","lastTransitionTime":"2026-01-06T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.704380 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:42 crc kubenswrapper[4869]: E0106 14:00:42.704611 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.779499 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.779564 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.779582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.779609 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.779630 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:42Z","lastTransitionTime":"2026-01-06T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.882575 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.882633 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.882649 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.882674 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.882687 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:42Z","lastTransitionTime":"2026-01-06T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.984911 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.984963 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.984971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.984987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:42 crc kubenswrapper[4869]: I0106 14:00:42.984998 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:42Z","lastTransitionTime":"2026-01-06T14:00:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.087890 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.087944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.087957 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.087978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.087992 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:43Z","lastTransitionTime":"2026-01-06T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.191398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.191443 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.191452 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.191468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.191479 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:43Z","lastTransitionTime":"2026-01-06T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.294378 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.294434 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.294448 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.294469 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.294483 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:43Z","lastTransitionTime":"2026-01-06T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.398916 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.398990 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.399005 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.399033 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.399046 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:43Z","lastTransitionTime":"2026-01-06T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.502879 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.502945 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.502957 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.502977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.502993 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:43Z","lastTransitionTime":"2026-01-06T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.605979 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.606055 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.606072 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.606103 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.606122 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:43Z","lastTransitionTime":"2026-01-06T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.703380 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.703404 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:43 crc kubenswrapper[4869]: E0106 14:00:43.703547 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.703607 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:43 crc kubenswrapper[4869]: E0106 14:00:43.703872 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:43 crc kubenswrapper[4869]: E0106 14:00:43.703933 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.708298 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.708347 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.708357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.708394 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.708404 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:43Z","lastTransitionTime":"2026-01-06T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.812726 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.812791 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.812807 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.812832 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.812849 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:43Z","lastTransitionTime":"2026-01-06T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.916392 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.917431 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.917453 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.917529 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.917551 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:43Z","lastTransitionTime":"2026-01-06T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.995780 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.995833 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.995844 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.995862 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:43 crc kubenswrapper[4869]: I0106 14:00:43.995874 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:43Z","lastTransitionTime":"2026-01-06T14:00:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:44 crc kubenswrapper[4869]: E0106 14:00:44.013337 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:44Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.019578 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.019621 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.019636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.019659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.019680 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:44Z","lastTransitionTime":"2026-01-06T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:44 crc kubenswrapper[4869]: E0106 14:00:44.034519 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:44Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.039457 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.039486 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.039495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.039511 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.039526 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:44Z","lastTransitionTime":"2026-01-06T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:44 crc kubenswrapper[4869]: E0106 14:00:44.054483 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:44Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.059287 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.059374 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.059404 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.059437 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.059457 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:44Z","lastTransitionTime":"2026-01-06T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:44 crc kubenswrapper[4869]: E0106 14:00:44.076256 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:44Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.081379 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.081463 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.081487 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.081519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.081543 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:44Z","lastTransitionTime":"2026-01-06T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:44 crc kubenswrapper[4869]: E0106 14:00:44.095189 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:44Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:44 crc kubenswrapper[4869]: E0106 14:00:44.095333 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.096850 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.096877 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.096887 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.096900 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.096911 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:44Z","lastTransitionTime":"2026-01-06T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.199614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.199657 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.199678 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.199727 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.199743 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:44Z","lastTransitionTime":"2026-01-06T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.303744 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.303852 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.303865 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.303883 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.303893 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:44Z","lastTransitionTime":"2026-01-06T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.407541 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.407589 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.407601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.407619 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.407633 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:44Z","lastTransitionTime":"2026-01-06T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.512165 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.512261 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.512307 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.512337 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.512361 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:44Z","lastTransitionTime":"2026-01-06T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.616018 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.616068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.616078 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.616100 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.616117 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:44Z","lastTransitionTime":"2026-01-06T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.704219 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.705256 4869 scope.go:117] "RemoveContainer" containerID="15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b" Jan 06 14:00:44 crc kubenswrapper[4869]: E0106 14:00:44.705493 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2f9tq_openshift-ovn-kubernetes(487c527a-7d89-4175-8827-c8cdd6e0211f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" Jan 06 14:00:44 crc kubenswrapper[4869]: E0106 14:00:44.705870 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.719159 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.719203 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.719213 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.719232 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.719243 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:44Z","lastTransitionTime":"2026-01-06T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.821823 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.821873 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.821882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.821898 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.821907 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:44Z","lastTransitionTime":"2026-01-06T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.925446 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.925484 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.925495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.925515 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:44 crc kubenswrapper[4869]: I0106 14:00:44.925527 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:44Z","lastTransitionTime":"2026-01-06T14:00:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.030123 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.030190 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.030199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.030216 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.030227 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:45Z","lastTransitionTime":"2026-01-06T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.133489 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.133574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.133599 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.133640 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.133676 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:45Z","lastTransitionTime":"2026-01-06T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.237625 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.237710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.237727 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.237749 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.237764 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:45Z","lastTransitionTime":"2026-01-06T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.340503 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.340571 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.340584 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.340605 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.340617 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:45Z","lastTransitionTime":"2026-01-06T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.442884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.442948 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.442961 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.442980 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.442993 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:45Z","lastTransitionTime":"2026-01-06T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.545474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.545519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.545528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.545543 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.545553 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:45Z","lastTransitionTime":"2026-01-06T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.648242 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.648313 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.648323 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.648341 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.648351 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:45Z","lastTransitionTime":"2026-01-06T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.704040 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.704118 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.704140 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:45 crc kubenswrapper[4869]: E0106 14:00:45.704197 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:45 crc kubenswrapper[4869]: E0106 14:00:45.704328 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:45 crc kubenswrapper[4869]: E0106 14:00:45.704413 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.751469 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.751560 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.751585 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.751618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.751640 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:45Z","lastTransitionTime":"2026-01-06T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.855126 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.855175 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.855187 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.855207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.855221 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:45Z","lastTransitionTime":"2026-01-06T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.957970 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.958025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.958036 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.958051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:45 crc kubenswrapper[4869]: I0106 14:00:45.958060 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:45Z","lastTransitionTime":"2026-01-06T14:00:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.061088 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.061142 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.061152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.061171 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.061182 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:46Z","lastTransitionTime":"2026-01-06T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.163269 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.163314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.163328 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.163347 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.163357 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:46Z","lastTransitionTime":"2026-01-06T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.266510 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.266573 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.266585 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.266599 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.266608 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:46Z","lastTransitionTime":"2026-01-06T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.368878 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.368917 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.368946 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.368960 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.368969 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:46Z","lastTransitionTime":"2026-01-06T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.471216 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.471249 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.471258 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.471273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.471282 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:46Z","lastTransitionTime":"2026-01-06T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.479266 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs\") pod \"network-metrics-daemon-mmdq4\" (UID: \"b86d961d-74c0-40cb-912d-ae0db79d97f2\") " pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:46 crc kubenswrapper[4869]: E0106 14:00:46.479390 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 06 14:00:46 crc kubenswrapper[4869]: E0106 14:00:46.479441 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs podName:b86d961d-74c0-40cb-912d-ae0db79d97f2 nodeName:}" failed. No retries permitted until 2026-01-06 14:01:18.479425941 +0000 UTC m=+97.019113605 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs") pod "network-metrics-daemon-mmdq4" (UID: "b86d961d-74c0-40cb-912d-ae0db79d97f2") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.574179 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.574237 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.574248 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.574265 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.574279 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:46Z","lastTransitionTime":"2026-01-06T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.676293 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.676346 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.676390 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.676412 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.676424 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:46Z","lastTransitionTime":"2026-01-06T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.704122 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:46 crc kubenswrapper[4869]: E0106 14:00:46.704273 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
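The retry above is pushed out 32 seconds (durationBeforeRetry 32s), consistent with a doubling backoff applied to repeated MountVolume failures for the same volume; the "object "openshift-multus"/"metrics-daemon-secret" not registered" error typically means the kubelet's secret manager has not yet registered that secret for the pod, so every mount attempt fails until it does. A minimal Go sketch of such a doubling backoff follows; the 500ms initial delay and the ~2m cap are assumptions chosen to reproduce the observed 32s step, not kubelet's exact constants.

    package main

    import (
    	"fmt"
    	"time"
    )

    // Sketch of a doubling retry backoff like the one visible in the log.
    // Initial delay and cap are assumed values, not kubelet's constants.
    func main() {
    	delay := 500 * time.Millisecond
    	maxDelay := 2*time.Minute + 2*time.Second
    	for attempt := 1; attempt <= 8; attempt++ {
    		fmt.Printf("attempt %d: next retry in %s\n", attempt, delay)
    		delay *= 2 // double the wait after each failure
    		if delay > maxDelay {
    			delay = maxDelay // clamp so retries never back off indefinitely
    		}
    	}
    }

Under these assumed values the seventh failure lands on a 32s wait, matching the log line above.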
pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.778676 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.778741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.778758 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.778783 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.778801 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:46Z","lastTransitionTime":"2026-01-06T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.881606 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.881681 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.881696 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.881713 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.881726 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:46Z","lastTransitionTime":"2026-01-06T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.984466 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.984504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.984513 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.984528 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:46 crc kubenswrapper[4869]: I0106 14:00:46.984539 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:46Z","lastTransitionTime":"2026-01-06T14:00:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.092024 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.092064 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.092074 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.092089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.092098 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:47Z","lastTransitionTime":"2026-01-06T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.193839 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.193875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.193885 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.193900 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.193910 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:47Z","lastTransitionTime":"2026-01-06T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.296619 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.296725 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.296749 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.296775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.296794 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:47Z","lastTransitionTime":"2026-01-06T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.398749 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.398804 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.398815 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.398829 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.398839 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:47Z","lastTransitionTime":"2026-01-06T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.500899 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.500947 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.500957 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.500972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.500981 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:47Z","lastTransitionTime":"2026-01-06T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.603381 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.603432 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.603445 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.603467 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.603482 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:47Z","lastTransitionTime":"2026-01-06T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.704180 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.704199 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.704213 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:47 crc kubenswrapper[4869]: E0106 14:00:47.704316 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:47 crc kubenswrapper[4869]: E0106 14:00:47.704428 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:47 crc kubenswrapper[4869]: E0106 14:00:47.704558 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.707929 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.707957 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.707967 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.707982 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.707992 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:47Z","lastTransitionTime":"2026-01-06T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.811549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.811606 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.811624 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.811651 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.811674 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:47Z","lastTransitionTime":"2026-01-06T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.914896 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.914948 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.914958 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.914976 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:47 crc kubenswrapper[4869]: I0106 14:00:47.914988 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:47Z","lastTransitionTime":"2026-01-06T14:00:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.018173 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.018221 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.018232 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.018247 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.018257 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:48Z","lastTransitionTime":"2026-01-06T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.121303 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.121360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.121373 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.121395 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.121409 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:48Z","lastTransitionTime":"2026-01-06T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.224196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.224247 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.224264 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.224330 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.224347 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:48Z","lastTransitionTime":"2026-01-06T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.327262 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.327301 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.327311 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.327326 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.327336 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:48Z","lastTransitionTime":"2026-01-06T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.430978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.431046 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.431083 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.431122 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.431135 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:48Z","lastTransitionTime":"2026-01-06T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.534382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.534764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.534855 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.534937 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.534996 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:48Z","lastTransitionTime":"2026-01-06T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.637069 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.637102 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.637111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.637125 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.637134 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:48Z","lastTransitionTime":"2026-01-06T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.703989 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:48 crc kubenswrapper[4869]: E0106 14:00:48.704146 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.739900 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.740240 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.740325 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.740402 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.740471 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:48Z","lastTransitionTime":"2026-01-06T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.843709 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.844026 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.844111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.844195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.844270 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:48Z","lastTransitionTime":"2026-01-06T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.947307 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.947352 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.947365 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.947381 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:48 crc kubenswrapper[4869]: I0106 14:00:48.947392 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:48Z","lastTransitionTime":"2026-01-06T14:00:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.050288 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.050336 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.050349 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.050368 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.050382 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:49Z","lastTransitionTime":"2026-01-06T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.153554 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.153873 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.153949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.154034 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.154094 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:49Z","lastTransitionTime":"2026-01-06T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.257911 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.257952 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.257962 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.257978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.257990 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:49Z","lastTransitionTime":"2026-01-06T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.360406 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.360442 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.360450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.360464 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.360474 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:49Z","lastTransitionTime":"2026-01-06T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.462967 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.463304 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.463400 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.463490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.463575 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:49Z","lastTransitionTime":"2026-01-06T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.565696 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.565733 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.565744 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.565767 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.565778 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:49Z","lastTransitionTime":"2026-01-06T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.668617 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.668696 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.668710 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.668734 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.668744 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:49Z","lastTransitionTime":"2026-01-06T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.703723 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:49 crc kubenswrapper[4869]: E0106 14:00:49.703884 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.704104 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:49 crc kubenswrapper[4869]: E0106 14:00:49.704166 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.704403 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:49 crc kubenswrapper[4869]: E0106 14:00:49.704532 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.771057 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.771103 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.771117 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.771133 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.771686 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:49Z","lastTransitionTime":"2026-01-06T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.874678 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.874728 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.874738 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.874756 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.874767 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:49Z","lastTransitionTime":"2026-01-06T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.977146 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.977188 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.977198 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.977212 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:49 crc kubenswrapper[4869]: I0106 14:00:49.977222 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:49Z","lastTransitionTime":"2026-01-06T14:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.079746 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.079789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.079798 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.079816 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.079828 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:50Z","lastTransitionTime":"2026-01-06T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.182417 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.182475 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.182503 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.182519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.182529 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:50Z","lastTransitionTime":"2026-01-06T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.193255 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-68bvk_e40cdd2b-5d24-4ef5-995a-4e09fc90d33c/kube-multus/0.log" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.193319 4869 generic.go:334] "Generic (PLEG): container finished" podID="e40cdd2b-5d24-4ef5-995a-4e09fc90d33c" containerID="7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724" exitCode=1 Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.193358 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-68bvk" event={"ID":"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c","Type":"ContainerDied","Data":"7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724"} Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.193868 4869 scope.go:117] "RemoveContainer" containerID="7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.229831 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9
442f59c5875c8b73096c561b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:27Z\\\",\\\"message\\\":\\\"or removal\\\\nI0106 14:00:27.642050 6512 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0106 14:00:27.642074 6512 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0106 14:00:27.642078 6512 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0106 14:00:27.642101 6512 factory.go:656] Stopping watch factory\\\\nI0106 14:00:27.642118 6512 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0106 14:00:27.642161 6512 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0106 14:00:27.642174 6512 handler.go:208] Removed *v1.Node event handler 7\\\\nI0106 14:00:27.642180 6512 handler.go:208] Removed *v1.Node event handler 2\\\\nI0106 14:00:27.642187 6512 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:27.642197 6512 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:27.642203 6512 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0106 14:00:27.642209 6512 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:27.642215 6512 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0106 14:00:27.642391 6512 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:27.642440 6512 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2f9tq_openshift-ovn-kubernetes(487c527a-7d89-4175-8827-c8cdd6e0211f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:50Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.247270 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:49Z\\\",\\\"message\\\":\\\"2026-01-06T14:00:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2604fae8-ccd8-406e-ad13-a97252cbe9c6\\\\n2026-01-06T14:00:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2604fae8-ccd8-406e-ad13-a97252cbe9c6 to /host/opt/cni/bin/\\\\n2026-01-06T14:00:04Z [verbose] multus-daemon started\\\\n2026-01-06T14:00:04Z [verbose] Readiness Indicator file check\\\\n2026-01-06T14:00:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:50Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.262801 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:50Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.281062 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:50Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.286150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.286229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.286246 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.286293 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.286308 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:50Z","lastTransitionTime":"2026-01-06T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.297567 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:50Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.311836 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:50Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.327476 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdc25c94-5921-41e8-99dc-fe1805225287\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69058b488c453bb2e06695939568f0297a970aff932569db85da433feb5814d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://435bff2936635a82afe7ca4597f37b18da009622047b4c6f0908d2562fbf9067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d94b86e136d1d14bac701960114e85125092e2d511e21bbec0a9b0f43e29989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:50Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.343048 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:50Z is after 
2025-08-24T17:21:41Z" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.361495 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:50Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.381930 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:50Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.388911 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.388955 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:50 crc 
kubenswrapper[4869]: I0106 14:00:50.388968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.389013 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.389030 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:50Z","lastTransitionTime":"2026-01-06T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.397261 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135cdf06b4dab396dd133be2b922d563745a0bfd2fc9dce55e2cdbb2a3447ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3d2c1a91a8a2b3549c9a11e1424037b15b51e7701062eb7e95dff4dfb5cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:0
0:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-64qxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:50Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.413068 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:50Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.428042 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:50Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.443558 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:50Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.457841 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b86d961d-74c0-40cb-912d-ae0db79d97f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mmdq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:50Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.473433 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:50Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.491462 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.491493 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.491500 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.491514 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.491525 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:50Z","lastTransitionTime":"2026-01-06T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.594161 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.594207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.594218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.594234 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.594245 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:50Z","lastTransitionTime":"2026-01-06T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.697298 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.697343 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.697353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.697369 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.697378 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:50Z","lastTransitionTime":"2026-01-06T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.703697 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:50 crc kubenswrapper[4869]: E0106 14:00:50.703834 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.801002 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.801087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.801099 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.801119 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.801131 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:50Z","lastTransitionTime":"2026-01-06T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.904189 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.904255 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.904274 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.904304 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:50 crc kubenswrapper[4869]: I0106 14:00:50.904322 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:50Z","lastTransitionTime":"2026-01-06T14:00:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.007639 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.007714 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.007728 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.007750 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.007763 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:51Z","lastTransitionTime":"2026-01-06T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.112165 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.112216 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.112225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.112243 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.112256 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:51Z","lastTransitionTime":"2026-01-06T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.199249 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-68bvk_e40cdd2b-5d24-4ef5-995a-4e09fc90d33c/kube-multus/0.log" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.199340 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-68bvk" event={"ID":"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c","Type":"ContainerStarted","Data":"4d3985462b751fad731c61b70bd276f0e2c8159ecea877bc89ed7066061842da"} Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.215790 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.215818 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.215827 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.215840 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.215850 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:51Z","lastTransitionTime":"2026-01-06T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.218149 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.232386 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.250024 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.262613 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135cdf06b4dab396dd133be2b922d563745a0bfd2fc9dce55e2cdbb2a3447ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3d2c1a91a8a2b3549c9a11e1424037b15b51e7701062eb7e95dff4dfb5cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-64qxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 
14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.277419 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.291963 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.303111 4869 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.315967 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b86d961d-74c0-40cb-912d-ae0db79d97f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mmdq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.318308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.318364 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.318382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.318408 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.318426 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:51Z","lastTransitionTime":"2026-01-06T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.331755 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.353418 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:27Z\\\",\\\"message\\\":\\\"or removal\\\\nI0106 14:00:27.642050 6512 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0106 14:00:27.642074 6512 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0106 14:00:27.642078 6512 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0106 14:00:27.642101 6512 factory.go:656] Stopping watch factory\\\\nI0106 14:00:27.642118 6512 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0106 14:00:27.642161 6512 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0106 14:00:27.642174 6512 handler.go:208] Removed *v1.Node event handler 7\\\\nI0106 14:00:27.642180 6512 handler.go:208] Removed *v1.Node event handler 2\\\\nI0106 14:00:27.642187 6512 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:27.642197 6512 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:27.642203 6512 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0106 14:00:27.642209 6512 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:27.642215 6512 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0106 14:00:27.642391 6512 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:27.642440 6512 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2f9tq_openshift-ovn-kubernetes(487c527a-7d89-4175-8827-c8cdd6e0211f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.369547 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d3985462b751fad731c61b70bd276f0e2c8159ecea877bc89ed7066061842da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:49Z\\\",\\\"message\\\":\\\"2026-01-06T14:00:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2604fae8-ccd8-406e-ad13-a97252cbe9c6\\\\n2026-01-06T14:00:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2604fae8-ccd8-406e-ad13-a97252cbe9c6 to 
/host/opt/cni/bin/\\\\n2026-01-06T14:00:04Z [verbose] multus-daemon started\\\\n2026-01-06T14:00:04Z [verbose] Readiness Indicator file check\\\\n2026-01-06T14:00:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.385843 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.404836 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.420256 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.421893 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.421950 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.421964 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.422018 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.422033 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:51Z","lastTransitionTime":"2026-01-06T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.439263 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.457454 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdc25c94-5921-41e8-99dc-fe1805225287\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69058b488c453bb2e06695939568f0297a970aff932569db85da433feb5814d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://435bff2936635a82afe7ca4597f37b18da009622047b4c6f0908d2562fbf9067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d94b86e136d1d14bac701960114e85125092e2d511e21bbec0a9b0f43e29989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.524455 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.524508 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.524521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.524542 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.524558 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:51Z","lastTransitionTime":"2026-01-06T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.631895 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.631972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.631993 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.632024 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.632051 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:51Z","lastTransitionTime":"2026-01-06T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.704558 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:51 crc kubenswrapper[4869]: E0106 14:00:51.704731 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.704882 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.704981 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:51 crc kubenswrapper[4869]: E0106 14:00:51.705099 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:51 crc kubenswrapper[4869]: E0106 14:00:51.705256 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.714417 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.732099 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:27Z\\\",\\\"message\\\":\\\"or removal\\\\nI0106 14:00:27.642050 6512 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0106 14:00:27.642074 6512 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0106 14:00:27.642078 6512 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0106 14:00:27.642101 6512 factory.go:656] Stopping watch factory\\\\nI0106 14:00:27.642118 6512 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0106 14:00:27.642161 6512 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0106 14:00:27.642174 6512 handler.go:208] Removed *v1.Node event handler 7\\\\nI0106 14:00:27.642180 6512 handler.go:208] Removed *v1.Node event handler 2\\\\nI0106 14:00:27.642187 6512 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:27.642197 6512 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:27.642203 6512 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0106 14:00:27.642209 6512 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:27.642215 6512 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0106 14:00:27.642391 6512 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:27.642440 6512 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2f9tq_openshift-ovn-kubernetes(487c527a-7d89-4175-8827-c8cdd6e0211f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.735808 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.735864 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.735877 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.735897 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.735910 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:51Z","lastTransitionTime":"2026-01-06T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
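[annotation] The NodeNotReady condition recorded here is the kubelet's standard network-readiness gate: the container runtime reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/. A minimal Go sketch of that directory check, using only the standard library; the directory path comes from the log, while the accepted extensions are the usual CNI ones and the real check lives in the runtime's CNI code, so treat this as an illustration, not the actual implementation:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log entry above
	var confs []string
	// CNI runtimes conventionally accept .conf, .conflist and .json files.
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err == nil {
			confs = append(confs, matches...)
		}
	}
	if len(confs) == 0 {
		// This is the situation the kubelet keeps reporting: NetworkReady=false
		// because the network plugin has not yet written its configuration.
		fmt.Fprintln(os.Stderr, "no CNI configuration file in", confDir)
		os.Exit(1)
	}
	fmt.Println("CNI configuration present:", confs)
}
```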
Has your network provider started?"} Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.744889 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d3985462b751fad731c61b70bd276f0e2c8159ecea877bc89ed7066061842da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:49Z\\\",\\\"message\\\":\\\"2026-01-06T14:00:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2604fae8-ccd8-406e-ad13-a97252cbe9c6\\\\n2026-01-06T14:00:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2604fae8-ccd8-406e-ad13-a97252cbe9c6 to /host/opt/cni/bin/\\\\n2026-01-06T14:00:04Z [verbose] multus-daemon started\\\\n2026-01-06T14:00:04Z [verbose] Readiness Indicator file check\\\\n2026-01-06T14:00:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
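[annotation] The kube-multus restart logged here follows a poll-until-timeout wait on a "readiness indicator file" that ovn-kubernetes writes once its CNI is usable; the "pollimmediate error: timed out waiting for the condition" text is characteristic of the wait helpers in k8s.io/apimachinery. A minimal standard-library sketch of that wait: only the file path is taken from the log, and the one-second interval and ~45 s budget (implied by the 14:00:04 start and 14:00:49 error timestamps) are assumptions:

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForFile polls immediately and then once per interval until the file
// exists or the timeout elapses, mirroring a PollImmediate-style wait.
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	// Path from the log; interval and timeout are assumed values.
	path := "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	if err := waitForFile(path, time.Second, 45*time.Second); err != nil {
		fmt.Fprintf(os.Stderr, "readiness indicator file check failed: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("default network is ready:", path)
}
```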
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.762793 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.776105 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.791862 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.807352 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.823550 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdc25c94-5921-41e8-99dc-fe1805225287\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69058b488c453bb2e06695939568f0297a970aff932569db85da433feb5814d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://435bff2936635a82afe7ca4597f37b18da009622047b4c6f0908d2562fbf9067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d94b86e136d1d14bac701960114e85125092e2d511e21bbec0a9b0f43e29989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.837766 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.837793 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.837802 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.837818 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.837831 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:51Z","lastTransitionTime":"2026-01-06T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.840138 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.860467 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.883348 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.903369 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135cdf06b4dab396dd133be2b922d563745a0bfd2fc9dce55e2cdbb2a3447ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3d2c1a91a8a2b3549c9a11e1424037b15b51e7701062eb7e95dff4dfb5cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-64qxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 
14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.922198 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.941066 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.941253 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.941281 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.941290 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.941303 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.941314 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:51Z","lastTransitionTime":"2026-01-06T14:00:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.956894 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:51 crc kubenswrapper[4869]: I0106 14:00:51.972829 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b86d961d-74c0-40cb-912d-ae0db79d97f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mmdq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:51Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:52 crc 
kubenswrapper[4869]: I0106 14:00:52.043900 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.043932 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.043943 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.043959 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.043971 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:52Z","lastTransitionTime":"2026-01-06T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.146261 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.146617 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.146630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.146652 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.146663 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:52Z","lastTransitionTime":"2026-01-06T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.249630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.249698 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.249715 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.249732 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.249742 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:52Z","lastTransitionTime":"2026-01-06T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.352583 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.352644 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.352665 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.352750 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.352765 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:52Z","lastTransitionTime":"2026-01-06T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.455901 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.455947 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.455957 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.455972 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.455985 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:52Z","lastTransitionTime":"2026-01-06T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.558549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.558610 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.558619 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.558634 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.558645 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:52Z","lastTransitionTime":"2026-01-06T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.661542 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.661611 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.661626 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.661652 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.661687 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:52Z","lastTransitionTime":"2026-01-06T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.703656 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:52 crc kubenswrapper[4869]: E0106 14:00:52.704033 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.765491 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.765557 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.765574 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.765597 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.765611 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:52Z","lastTransitionTime":"2026-01-06T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.868016 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.868058 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.868069 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.868085 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.868095 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:52Z","lastTransitionTime":"2026-01-06T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.971198 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.971257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.971276 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.971306 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:52 crc kubenswrapper[4869]: I0106 14:00:52.971336 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:52Z","lastTransitionTime":"2026-01-06T14:00:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.074615 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.074682 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.074694 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.074716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.074729 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:53Z","lastTransitionTime":"2026-01-06T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.178337 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.178389 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.178402 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.178420 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.178433 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:53Z","lastTransitionTime":"2026-01-06T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.280985 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.281052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.281067 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.281091 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.281108 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:53Z","lastTransitionTime":"2026-01-06T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.385033 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.385084 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.385097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.385115 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.385130 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:53Z","lastTransitionTime":"2026-01-06T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.488353 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.488401 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.488411 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.488429 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.488442 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:53Z","lastTransitionTime":"2026-01-06T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.592500 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.592550 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.592570 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.592595 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.592612 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:53Z","lastTransitionTime":"2026-01-06T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.696304 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.696348 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.696361 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.696378 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.696390 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:53Z","lastTransitionTime":"2026-01-06T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.704280 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:53 crc kubenswrapper[4869]: E0106 14:00:53.704453 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.704534 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:53 crc kubenswrapper[4869]: E0106 14:00:53.705100 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.705589 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:53 crc kubenswrapper[4869]: E0106 14:00:53.705707 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.799386 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.799441 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.799454 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.799477 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.799491 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:53Z","lastTransitionTime":"2026-01-06T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.903382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.903435 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.903447 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.903471 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:53 crc kubenswrapper[4869]: I0106 14:00:53.903487 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:53Z","lastTransitionTime":"2026-01-06T14:00:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.006407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.006474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.006502 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.006534 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.006561 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:54Z","lastTransitionTime":"2026-01-06T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.109821 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.109879 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.109891 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.109911 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.110321 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:54Z","lastTransitionTime":"2026-01-06T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.212789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.212840 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.212858 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.212884 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.212902 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:54Z","lastTransitionTime":"2026-01-06T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.316129 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.316195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.316212 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.316237 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.316255 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:54Z","lastTransitionTime":"2026-01-06T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.419873 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.420206 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.420329 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.420411 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.420481 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:54Z","lastTransitionTime":"2026-01-06T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.445429 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.445698 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.445827 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.445918 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.445982 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:54Z","lastTransitionTime":"2026-01-06T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:54 crc kubenswrapper[4869]: E0106 14:00:54.467582 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:54Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.472906 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.472965 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.472975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.472994 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.473005 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:54Z","lastTransitionTime":"2026-01-06T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:54 crc kubenswrapper[4869]: E0106 14:00:54.489905 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:54Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.494628 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.494718 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.494741 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.494767 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.494787 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:54Z","lastTransitionTime":"2026-01-06T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:54 crc kubenswrapper[4869]: E0106 14:00:54.517722 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:54Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.522140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.522196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.522211 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.522239 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.522258 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:54Z","lastTransitionTime":"2026-01-06T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:54 crc kubenswrapper[4869]: E0106 14:00:54.535621 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:54Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.540881 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.540967 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.540990 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.541020 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.541041 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:54Z","lastTransitionTime":"2026-01-06T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:54 crc kubenswrapper[4869]: E0106 14:00:54.562656 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:00:54Z is after 2025-08-24T17:21:41Z" Jan 06 14:00:54 crc kubenswrapper[4869]: E0106 14:00:54.562937 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.565927 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.566049 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.566071 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.566103 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.566124 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:54Z","lastTransitionTime":"2026-01-06T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.676827 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.677221 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.677350 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.677563 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.677711 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:54Z","lastTransitionTime":"2026-01-06T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.703533 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:54 crc kubenswrapper[4869]: E0106 14:00:54.703718 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.781037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.781535 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.782026 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.782406 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.782859 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:54Z","lastTransitionTime":"2026-01-06T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.886324 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.886368 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.886380 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.886398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.886411 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:54Z","lastTransitionTime":"2026-01-06T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.989290 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.989337 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.989350 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.989369 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:54 crc kubenswrapper[4869]: I0106 14:00:54.989383 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:54Z","lastTransitionTime":"2026-01-06T14:00:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.092875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.092937 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.092951 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.092975 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.092987 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:55Z","lastTransitionTime":"2026-01-06T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.196075 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.196132 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.196145 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.196163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.196177 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:55Z","lastTransitionTime":"2026-01-06T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.298815 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.298885 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.298909 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.298937 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.298978 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:55Z","lastTransitionTime":"2026-01-06T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.402839 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.402952 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.402992 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.403029 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.403053 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:55Z","lastTransitionTime":"2026-01-06T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.506983 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.507053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.507076 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.507107 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.507133 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:55Z","lastTransitionTime":"2026-01-06T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.611196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.611245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.611256 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.611279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.611293 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:55Z","lastTransitionTime":"2026-01-06T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.704076 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.704095 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:55 crc kubenswrapper[4869]: E0106 14:00:55.704251 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.704288 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:55 crc kubenswrapper[4869]: E0106 14:00:55.704431 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:55 crc kubenswrapper[4869]: E0106 14:00:55.704619 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.714073 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.714104 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.714116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.714129 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.714140 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:55Z","lastTransitionTime":"2026-01-06T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.817533 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.817600 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.817623 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.817656 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.817740 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:55Z","lastTransitionTime":"2026-01-06T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.921985 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.922614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.923091 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.923310 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:55 crc kubenswrapper[4869]: I0106 14:00:55.923511 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:55Z","lastTransitionTime":"2026-01-06T14:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.029424 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.030016 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.030407 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.030955 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.031432 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:56Z","lastTransitionTime":"2026-01-06T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.134768 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.134819 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.134834 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.134855 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.134869 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:56Z","lastTransitionTime":"2026-01-06T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.237577 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.238285 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.238410 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.238546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.238654 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:56Z","lastTransitionTime":"2026-01-06T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.342960 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.343038 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.343063 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.343088 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.343109 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:56Z","lastTransitionTime":"2026-01-06T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.446766 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.447172 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.447319 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.447451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.447548 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:56Z","lastTransitionTime":"2026-01-06T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.551230 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.551602 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.551955 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.552251 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.552553 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:56Z","lastTransitionTime":"2026-01-06T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.656200 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.656238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.656248 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.656264 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.656275 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:56Z","lastTransitionTime":"2026-01-06T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.704001 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:56 crc kubenswrapper[4869]: E0106 14:00:56.704197 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.778491 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.778526 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.778535 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.778551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.778560 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:56Z","lastTransitionTime":"2026-01-06T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.881015 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.881068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.881077 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.881099 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.881112 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:56Z","lastTransitionTime":"2026-01-06T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.984112 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.984205 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.984229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.984266 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:56 crc kubenswrapper[4869]: I0106 14:00:56.984289 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:56Z","lastTransitionTime":"2026-01-06T14:00:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.087301 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.087365 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.087383 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.087413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.087446 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:57Z","lastTransitionTime":"2026-01-06T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.190643 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.191243 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.191414 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.191580 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.191806 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:57Z","lastTransitionTime":"2026-01-06T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.296296 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.296404 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.296429 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.296463 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.296488 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:57Z","lastTransitionTime":"2026-01-06T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.401152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.401202 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.401212 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.401230 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.401240 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:57Z","lastTransitionTime":"2026-01-06T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.504859 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.504930 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.504949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.504979 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.504998 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:57Z","lastTransitionTime":"2026-01-06T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.608257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.608300 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.608311 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.608329 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.608339 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:57Z","lastTransitionTime":"2026-01-06T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.704254 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.704273 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.704398 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:57 crc kubenswrapper[4869]: E0106 14:00:57.704585 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:57 crc kubenswrapper[4869]: E0106 14:00:57.704688 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:57 crc kubenswrapper[4869]: E0106 14:00:57.704800 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.710004 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.710041 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.710053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.710069 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.710081 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:57Z","lastTransitionTime":"2026-01-06T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.813958 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.814028 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.814037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.814054 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.814070 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:57Z","lastTransitionTime":"2026-01-06T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.916513 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.917229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.917297 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.917372 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:57 crc kubenswrapper[4869]: I0106 14:00:57.917448 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:57Z","lastTransitionTime":"2026-01-06T14:00:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.019892 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.020230 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.020321 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.020422 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.020505 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:58Z","lastTransitionTime":"2026-01-06T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.123300 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.123656 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.123767 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.123833 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.123889 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:58Z","lastTransitionTime":"2026-01-06T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.227567 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.227614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.227627 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.227648 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.227683 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:58Z","lastTransitionTime":"2026-01-06T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.330093 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.330153 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.330165 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.330185 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.330198 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:58Z","lastTransitionTime":"2026-01-06T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.433679 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.433724 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.433736 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.433754 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.433767 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:58Z","lastTransitionTime":"2026-01-06T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.536398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.536442 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.536453 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.536469 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.536479 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:58Z","lastTransitionTime":"2026-01-06T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.639513 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.639575 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.639587 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.639609 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.639623 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:58Z","lastTransitionTime":"2026-01-06T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.703435 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:00:58 crc kubenswrapper[4869]: E0106 14:00:58.704214 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.742743 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.743264 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.743421 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.743630 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.743850 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:58Z","lastTransitionTime":"2026-01-06T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.847123 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.847490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.847582 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.847765 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.847892 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:58Z","lastTransitionTime":"2026-01-06T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.951978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.952046 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.952062 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.952083 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:58 crc kubenswrapper[4869]: I0106 14:00:58.952097 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:58Z","lastTransitionTime":"2026-01-06T14:00:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.055163 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.055202 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.055211 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.055225 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.055235 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:59Z","lastTransitionTime":"2026-01-06T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.158634 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.158691 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.158702 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.158719 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.158731 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:59Z","lastTransitionTime":"2026-01-06T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.261092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.261130 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.261138 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.261151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.261160 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:59Z","lastTransitionTime":"2026-01-06T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.364045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.364096 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.364104 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.364120 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.364134 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:59Z","lastTransitionTime":"2026-01-06T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.466186 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.466241 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.466251 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.466266 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.466275 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:59Z","lastTransitionTime":"2026-01-06T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.568898 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.568932 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.568942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.568957 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.568968 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:59Z","lastTransitionTime":"2026-01-06T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.671823 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.671876 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.671890 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.671908 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.671920 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:59Z","lastTransitionTime":"2026-01-06T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.703530 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.703538 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:00:59 crc kubenswrapper[4869]: E0106 14:00:59.703701 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.703743 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:00:59 crc kubenswrapper[4869]: E0106 14:00:59.704203 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:00:59 crc kubenswrapper[4869]: E0106 14:00:59.704320 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.704596 4869 scope.go:117] "RemoveContainer" containerID="15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.774786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.774843 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.774854 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.774874 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.774887 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:59Z","lastTransitionTime":"2026-01-06T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.877413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.877467 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.877479 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.877495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.877715 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:59Z","lastTransitionTime":"2026-01-06T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.980723 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.980786 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.980800 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.980818 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:00:59 crc kubenswrapper[4869]: I0106 14:00:59.980834 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:00:59Z","lastTransitionTime":"2026-01-06T14:00:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.083441 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.083493 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.083504 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.083523 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.083535 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:00Z","lastTransitionTime":"2026-01-06T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.186084 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.186150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.186165 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.186186 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.186199 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:00Z","lastTransitionTime":"2026-01-06T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.234062 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2f9tq_487c527a-7d89-4175-8827-c8cdd6e0211f/ovnkube-controller/2.log" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.236973 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerStarted","Data":"eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9"} Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.237398 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.258248 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:00Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.274142 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:00Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.286857 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:00Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.288842 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.288901 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.288922 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.289042 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.289055 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:00Z","lastTransitionTime":"2026-01-06T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.298147 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b86d961d-74c0-40cb-912d-ae0db79d97f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mmdq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:00Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.309796 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:00Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.329297 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:27Z\\\",\\\"message\\\":\\\"or removal\\\\nI0106 14:00:27.642050 6512 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0106 14:00:27.642074 6512 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0106 14:00:27.642078 6512 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0106 14:00:27.642101 6512 factory.go:656] Stopping watch factory\\\\nI0106 14:00:27.642118 6512 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0106 14:00:27.642161 6512 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0106 14:00:27.642174 6512 handler.go:208] Removed *v1.Node event handler 7\\\\nI0106 14:00:27.642180 6512 handler.go:208] Removed *v1.Node event handler 2\\\\nI0106 14:00:27.642187 6512 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:27.642197 6512 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:27.642203 6512 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0106 14:00:27.642209 6512 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:27.642215 6512 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0106 14:00:27.642391 6512 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:27.642440 6512 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:00Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.342452 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d3985462b751fad731c61b70bd276f0e2c8159ecea877bc89ed7066061842da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:49Z\\\",\\\"message\\\":\\\"2026-01-06T14:00:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_2604fae8-ccd8-406e-ad13-a97252cbe9c6\\\\n2026-01-06T14:00:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2604fae8-ccd8-406e-ad13-a97252cbe9c6 to /host/opt/cni/bin/\\\\n2026-01-06T14:00:04Z [verbose] multus-daemon started\\\\n2026-01-06T14:00:04Z [verbose] Readiness Indicator file check\\\\n2026-01-06T14:00:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:00Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.353037 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdc25c94-5921-41e8-99dc-fe1805225287\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69058b488c453bb2e06695939568f0297a970aff932569db85da433feb5814d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://435bff2936635a82afe7ca4597f37b18da009622047b4c6f0908d2562fbf9067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d94b86e136d1d14bac701960114e85125092e2d511e21bbec0a9b0f43e29989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:00Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.363964 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:00Z is after 
2025-08-24T17:21:41Z" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.375456 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:00Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.387881 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:00Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.392226 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.392274 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.392284 4869 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.392300 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.392316 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:00Z","lastTransitionTime":"2026-01-06T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.403454 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:00Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.418057 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:00Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.432851 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:00Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.450750 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:00Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.462584 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135cdf06b4dab396dd133be2b922d563745a0bfd2fc9dce55e2cdbb2a3447ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3d2c1a91a8a2b3549c9a11e1424037b15b51e7701062eb7e95dff4dfb5cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-64qxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:00Z is after 2025-08-24T17:21:41Z" Jan 06 
14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.495047 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.495124 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.495142 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.495178 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.495197 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:00Z","lastTransitionTime":"2026-01-06T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.599578 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.599638 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.599647 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.599685 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.599698 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:00Z","lastTransitionTime":"2026-01-06T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.702116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.702157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.702167 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.702181 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.702191 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:00Z","lastTransitionTime":"2026-01-06T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.703730 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:01:00 crc kubenswrapper[4869]: E0106 14:01:00.704005 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.804329 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.804379 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.804393 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.804413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.804427 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:00Z","lastTransitionTime":"2026-01-06T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.907689 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.907768 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.907777 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.907798 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:01:00 crc kubenswrapper[4869]: I0106 14:01:00.907809 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:00Z","lastTransitionTime":"2026-01-06T14:01:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.010832 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.010937 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.010983 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.011009 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.011026 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:01Z","lastTransitionTime":"2026-01-06T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.115097 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.115157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.115169 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.115190 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.115204 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:01Z","lastTransitionTime":"2026-01-06T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.218887 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.218969 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.218991 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.219022 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.219046 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:01Z","lastTransitionTime":"2026-01-06T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.329138 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.329192 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.329203 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.329221 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.329234 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:01Z","lastTransitionTime":"2026-01-06T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.432521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.432583 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.432593 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.432614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.432626 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:01Z","lastTransitionTime":"2026-01-06T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.535494 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.535569 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.535591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.535629 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.535654 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:01Z","lastTransitionTime":"2026-01-06T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.638900 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.638990 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.639011 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.639043 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.639063 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:01Z","lastTransitionTime":"2026-01-06T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.704020 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.704129 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.704042 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:01:01 crc kubenswrapper[4869]: E0106 14:01:01.704243 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:01:01 crc kubenswrapper[4869]: E0106 14:01:01.704383 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:01:01 crc kubenswrapper[4869]: E0106 14:01:01.704548 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.744417 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.744519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.744550 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.744585 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.744612 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:01Z","lastTransitionTime":"2026-01-06T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.748503 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:01Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.777057 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:01Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.798251 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:01Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.817501 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdc25c94-5921-41e8-99dc-fe1805225287\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69058b488c453bb2e06695939568f0297a970aff932569db85da433feb5814d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://435bff2936635a82afe7ca4597f37b18da009622047b4c6f0908d2562fbf9067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d94b86e136d1d14bac701960114e85125092e2d511e21bbec0a9b0f43e29989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:01Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.865527 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.865583 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.865604 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.865577 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:01Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.865628 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.865645 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:01Z","lastTransitionTime":"2026-01-06T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.881174 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:01Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.897730 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:01Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.918981 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135cdf06b4dab396dd133be2b922d563745a0bfd2fc9dce55e2cdbb2a3447ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3d2c1a91a8a2b3549c9a11e1424037b15b51e7701062eb7e95dff4dfb5cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-64qxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:01Z is after 2025-08-24T17:21:41Z" Jan 06 
14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.938105 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:01Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.955651 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:01Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.968188 4869 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.968231 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.968245 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.968265 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.968281 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:01Z","lastTransitionTime":"2026-01-06T14:01:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:01 crc kubenswrapper[4869]: I0106 14:01:01.976988 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:01Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.001893 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b86d961d-74c0-40cb-912d-ae0db79d97f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mmdq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:01Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:02 crc 
kubenswrapper[4869]: I0106 14:01:02.020921 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.045707 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:27Z\\\",\\\"message\\\":\\\"or removal\\\\nI0106 14:00:27.642050 6512 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0106 14:00:27.642074 6512 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0106 14:00:27.642078 6512 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0106 14:00:27.642101 6512 factory.go:656] Stopping watch factory\\\\nI0106 14:00:27.642118 6512 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0106 14:00:27.642161 6512 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0106 14:00:27.642174 6512 handler.go:208] Removed *v1.Node event handler 7\\\\nI0106 14:00:27.642180 6512 handler.go:208] Removed *v1.Node event handler 2\\\\nI0106 14:00:27.642187 6512 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:27.642197 6512 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:27.642203 6512 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0106 14:00:27.642209 6512 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:27.642215 6512 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0106 14:00:27.642391 6512 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:27.642440 6512 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.069234 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d3985462b751fad731c61b70bd276f0e2c8159ecea877bc89ed7066061842da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:49Z\\\",\\\"message\\\":\\\"2026-01-06T14:00:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_2604fae8-ccd8-406e-ad13-a97252cbe9c6\\\\n2026-01-06T14:00:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2604fae8-ccd8-406e-ad13-a97252cbe9c6 to /host/opt/cni/bin/\\\\n2026-01-06T14:00:04Z [verbose] multus-daemon started\\\\n2026-01-06T14:00:04Z [verbose] Readiness Indicator file check\\\\n2026-01-06T14:00:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.070281 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.070375 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.070440 4869 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.070535 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.070620 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:02Z","lastTransitionTime":"2026-01-06T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.094519 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.174957 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.175060 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.175089 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.175135 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.175163 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:02Z","lastTransitionTime":"2026-01-06T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.245328 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2f9tq_487c527a-7d89-4175-8827-c8cdd6e0211f/ovnkube-controller/3.log" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.245965 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2f9tq_487c527a-7d89-4175-8827-c8cdd6e0211f/ovnkube-controller/2.log" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.248774 4869 generic.go:334] "Generic (PLEG): container finished" podID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerID="eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9" exitCode=1 Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.248839 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerDied","Data":"eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9"} Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.248999 4869 scope.go:117] "RemoveContainer" containerID="15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.249727 4869 scope.go:117] "RemoveContainer" containerID="eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9" Jan 06 14:01:02 crc kubenswrapper[4869]: E0106 14:01:02.249996 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2f9tq_openshift-ovn-kubernetes(487c527a-7d89-4175-8827-c8cdd6e0211f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.270290 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"049f0484-d635-4877-9fdb-16aa6a1970d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-06T14:00:00Z\\\",\\\"message\\\":\\\"W0106 14:00:00.133490 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0106 14:00:00.133877 1 crypto.go:601] Generating new CA for check-endpoints-signer@1767708000 cert, and key in /tmp/serving-cert-3727702799/serving-signer.crt, /tmp/serving-cert-3727702799/serving-signer.key\\\\nI0106 14:00:00.554347 1 observer_polling.go:159] Starting file observer\\\\nW0106 14:00:00.562655 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0106 14:00:00.562828 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0106 14:00:00.563463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3727702799/tls.crt::/tmp/serving-cert-3727702799/tls.key\\\\\\\"\\\\nI0106 14:00:00.966602 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0106 14:00:00.969522 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0106 14:00:00.969550 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0106 14:00:00.969579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0106 14:00:00.969586 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF0106 14:00:00.977611 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.278592 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.278648 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.278688 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.278711 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.278726 4869 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:02Z","lastTransitionTime":"2026-01-06T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.284090 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.301267 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cca4d7e4-e530-4ffc-a1a3-5f5b7c758d74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9eb2249e7576a3c4966df2cc7197be2735afc04707bbe2a11e9a2d035b170b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eab114986305dea32646a57840d11d5aa911408b435ba1f0e3693b05ed73325\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d64aff1caf5fe6fdd78a0054dadad600cb1125a0ead2d2a70a989f16e4dd5d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ba5f61dbbf29f64b223cf3fb8a49e95b374abff5cf0eb6bf4f43c44d9f7db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8b97db85a6e84d006d604c7e812110c19edf7d112e7c31091e588c06a4a008a2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://644923ea14526bc67cdd19a768749862f56ebeaf0eaefb56dd8ba8865e490bfb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b044b41fe3edbb87c63c6b542df7a6a6e8d7dee87e3a1ce4d0ab81c54850e73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bksmj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4b8g7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.317165 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2940a7ac-7d7a-4b21-805d-a6d2afa4a3af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://135cdf06b4dab396dd133be2b922d563745a0bfd2fc9dce55e2cdbb2a3447ecc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b3d2c1a91a8a2b3549c9a11e1424037b15b51e7701062eb7e95dff4dfb5cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8wdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-64qxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.332715 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.346277 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"89b72572-a31b-48f1-93f4-cbfad03736b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34d27dcddfa7d682bf191f6bffd4e98b02adbf825dcc61ee3ed639e32bcd28e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhcnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kt9df\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.361475 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vjd79" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be5e99e3-237b-417d-b5b1-95187549c6ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bdbdfa85caf5c0f50173add808d015e9e4d93aa4fb0e6cdf146a811a58a6aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vjd79\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.376368 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b86d961d-74c0-40cb-912d-ae0db79d97f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cndw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mmdq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:02 crc 
kubenswrapper[4869]: I0106 14:01:02.382123 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.382279 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.382356 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.382441 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.382518 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:02Z","lastTransitionTime":"2026-01-06T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.393207 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlkdn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752ad1ae-d5af-4886-84af-a25fd3dd0eb9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ceaf30a08814268e8fc9ca795443810032353089feeaef2c417a9792e0adccc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc24f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlkdn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.412982 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"487c527a-7d89-4175-8827-c8cdd6e0211f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f24d
9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://15e5cc9f12cb8749c5af25260600f8c1e4c862a9442f59c5875c8b73096c561b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:27Z\\\",\\\"message\\\":\\\"or removal\\\\nI0106 14:00:27.642050 6512 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0106 14:00:27.642074 6512 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0106 14:00:27.642078 6512 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0106 14:00:27.642101 6512 factory.go:656] Stopping watch factory\\\\nI0106 14:00:27.642118 6512 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0106 14:00:27.642161 6512 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0106 14:00:27.642174 6512 handler.go:208] Removed *v1.Node event handler 7\\\\nI0106 14:00:27.642180 6512 handler.go:208] Removed *v1.Node event handler 2\\\\nI0106 14:00:27.642187 6512 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0106 14:00:27.642197 6512 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0106 14:00:27.642203 6512 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0106 14:00:27.642209 6512 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0106 14:00:27.642215 6512 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0106 14:00:27.642391 6512 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0106 14:00:27.642440 6512 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:01:02Z\\\",\\\"message\\\":\\\"t-machine-config-operator/machine-config-daemon-kt9df after 0 failed attempt(s)\\\\nI0106 14:01:00.651429 6917 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-kt9df\\\\nI0106 14:01:00.651225 6917 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0106 14:01:00.651444 6917 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0106 14:01:00.651451 6917 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0106 14:01:00.651457 6917 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0106 14:01:00.651269 6917 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0106 14:01:00.651460 6917 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}\\\\nI0106 14:01:00.651479 6917 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0106 14:01:00.651484 6917 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-857xw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2f9tq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.426865 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-68bvk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d3985462b751fad731c61b70bd276f0e2c8159ecea877bc89ed7066061842da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-06T14:00:49Z\\\",\\\"message\\\":\\\"2026-01-06T14:00:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2604fae8-ccd8-406e-ad13-a97252cbe9c6\\\\n2026-01-06T14:00:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2604fae8-ccd8-406e-ad13-a97252cbe9c6 to /host/opt/cni/bin/\\\\n2026-01-06T14:00:04Z [verbose] 
multus-daemon started\\\\n2026-01-06T14:00:04Z [verbose] Readiness Indicator file check\\\\n2026-01-06T14:00:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv4sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T14:00:01Z\\\"}}\" for pod \"openshift-multus\"/\"multus-68bvk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.437931 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cdc25c94-5921-41e8-99dc-fe1805225287\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-06T13:59:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a69058b488c453bb2e06695939568f0297a970aff932569db85da433feb5814d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://435bff2936635a82afe7ca4597f37b18da009622047b4c6f0908d2562fbf9067\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d94b86e136d1d14bac701960114e85125092e2d511e21bbec0a9b0f43e29989\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T13:59:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9379db0665d18753e2a182107335424277701859bb2b4c13f10bfaf06080cc74\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-06T13:59:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-06T13:59:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-06T13:59:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.451996 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d11e9097ed80ac14d60f5559338c4bbb6b554ac161b4dafe0fb89a4ff3930d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:02Z is after 
2025-08-24T17:21:41Z" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.468984 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:01Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.487039 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.487081 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.487092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.487115 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.487128 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:02Z","lastTransitionTime":"2026-01-06T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.489257 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:02Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ea0c32f6dd523dd43a479c696adee8b16b193e692dab02ecbd8686bc731e22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ed37b720bac4c884c9b05e018d6872f819c9fc99fdbf9beb9c3c655ae98eb82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.509643 4869 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-06T14:00:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aee87f8cc42308743afd1bc465d51cb786aeae04d0d0e9e5683647dc5415ba81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-06T14:00:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:02Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.590292 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.590377 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.590398 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.590432 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.590454 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:02Z","lastTransitionTime":"2026-01-06T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.694402 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.694466 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.694484 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.694511 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.694528 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:02Z","lastTransitionTime":"2026-01-06T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.703943 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:01:02 crc kubenswrapper[4869]: E0106 14:01:02.704143 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.798087 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.798159 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.798176 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.798205 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.798223 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:02Z","lastTransitionTime":"2026-01-06T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.902166 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.902243 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.902260 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.902289 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:02 crc kubenswrapper[4869]: I0106 14:01:02.902308 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:02Z","lastTransitionTime":"2026-01-06T14:01:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.005357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.005427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.005445 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.005472 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.005490 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:03Z","lastTransitionTime":"2026-01-06T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.109302 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.109366 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.109385 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.109413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.109433 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:03Z","lastTransitionTime":"2026-01-06T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.212394 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.212441 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.212451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.212470 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.212481 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:03Z","lastTransitionTime":"2026-01-06T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.315409 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.315468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.315482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.315501 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.315514 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:03Z","lastTransitionTime":"2026-01-06T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.418856 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.418951 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.418971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.419009 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.419062 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:03Z","lastTransitionTime":"2026-01-06T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.522018 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.522927 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.522983 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.523028 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.523054 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:03Z","lastTransitionTime":"2026-01-06T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.625882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.625936 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.625950 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.625966 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.625978 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:03Z","lastTransitionTime":"2026-01-06T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.703569 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.703704 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.703751 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:01:03 crc kubenswrapper[4869]: E0106 14:01:03.703931 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:01:03 crc kubenswrapper[4869]: E0106 14:01:03.704097 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:01:03 crc kubenswrapper[4869]: E0106 14:01:03.704151 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.730420 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.730506 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.730517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.730542 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.730558 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:03Z","lastTransitionTime":"2026-01-06T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.834833 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.834914 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.834942 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.834981 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.835015 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:03Z","lastTransitionTime":"2026-01-06T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.938591 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.938634 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.938646 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.938687 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:03 crc kubenswrapper[4869]: I0106 14:01:03.938702 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:03Z","lastTransitionTime":"2026-01-06T14:01:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.042610 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.042835 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.042865 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.042904 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.042930 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:04Z","lastTransitionTime":"2026-01-06T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.146876 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.146937 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.146951 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.146974 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.146988 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:04Z","lastTransitionTime":"2026-01-06T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.250652 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.250766 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.250792 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.250824 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.250844 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:04Z","lastTransitionTime":"2026-01-06T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.261916 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2f9tq_487c527a-7d89-4175-8827-c8cdd6e0211f/ovnkube-controller/3.log" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.358031 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.358553 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.358567 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.358592 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.358609 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:04Z","lastTransitionTime":"2026-01-06T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.463150 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.463219 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.463242 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.463277 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.463304 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:04Z","lastTransitionTime":"2026-01-06T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.566812 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.566857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.566871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.566888 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.566897 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:04Z","lastTransitionTime":"2026-01-06T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.670725 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.670806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.670821 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.670844 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.670859 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:04Z","lastTransitionTime":"2026-01-06T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.703712 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:01:04 crc kubenswrapper[4869]: E0106 14:01:04.703945 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.720952 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.774755 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.774820 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.774838 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.774865 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.774884 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:04Z","lastTransitionTime":"2026-01-06T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.794220 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.794379 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.794409 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.794478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.794498 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:04Z","lastTransitionTime":"2026-01-06T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:04 crc kubenswrapper[4869]: E0106 14:01:04.814329 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:01:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:01:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:01:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:01:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:04Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.821088 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.821178 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure"
Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.821198 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.821221 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.821237 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:04Z","lastTransitionTime":"2026-01-06T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:01:04 crc kubenswrapper[4869]: E0106 14:01:04.840903 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... status patch identical to the 14:01:04.814329 entry above ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:04Z is after 2025-08-24T17:21:41Z"
Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.846144 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.846317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.846438 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.846566 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.846703 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:04Z","lastTransitionTime":"2026-01-06T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:01:04 crc kubenswrapper[4869]: E0106 14:01:04.870043 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... status patch identical to the 14:01:04.814329 entry above ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:04Z is after 2025-08-24T17:21:41Z"
Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.876199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.876382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.876474 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.876585 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.876695 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:04Z","lastTransitionTime":"2026-01-06T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:01:04 crc kubenswrapper[4869]: E0106 14:01:04.891570 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... status patch identical to the 14:01:04.814329 entry above ...]
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:04Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.897568 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.897826 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.897919 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.898002 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.898079 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:04Z","lastTransitionTime":"2026-01-06T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:04 crc kubenswrapper[4869]: E0106 14:01:04.915813 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:01:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:01:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:01:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:01:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-06T14:01:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"efa88f90-2f2b-4bd6-b8cc-4623e7e87b81\\\",\\\"systemUUID\\\":\\\"7374d6af-17bd-430d-99ca-aaf4c2e05545\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-06T14:01:04Z is after 2025-08-24T17:21:41Z" Jan 06 14:01:04 crc kubenswrapper[4869]: E0106 14:01:04.916004 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.918371 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.918558 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.918705 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.918848 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:04 crc kubenswrapper[4869]: I0106 14:01:04.918954 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:04Z","lastTransitionTime":"2026-01-06T14:01:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.022635 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.022701 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.022712 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.022728 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.022737 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:05Z","lastTransitionTime":"2026-01-06T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.126298 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.126921 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.127268 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.127385 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.127470 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:05Z","lastTransitionTime":"2026-01-06T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.230355 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.230634 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.230806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.230923 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.231006 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:05Z","lastTransitionTime":"2026-01-06T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.333339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.333968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.333980 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.333997 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.334010 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:05Z","lastTransitionTime":"2026-01-06T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.436826 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.436866 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.436875 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.436891 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.436900 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:05Z","lastTransitionTime":"2026-01-06T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.539423 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.539472 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.539481 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.539499 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.539512 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:05Z","lastTransitionTime":"2026-01-06T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.605808 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.605857 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.605887 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.605913 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:01:05 crc kubenswrapper[4869]: E0106 14:01:05.606018 4869 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 06 14:01:05 crc kubenswrapper[4869]: E0106 14:01:05.606079 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-06 14:02:09.606060676 +0000 UTC m=+148.145748340 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 06 14:01:05 crc kubenswrapper[4869]: E0106 14:01:05.606122 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 06 14:01:05 crc kubenswrapper[4869]: E0106 14:01:05.606150 4869 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 06 14:01:05 crc kubenswrapper[4869]: E0106 14:01:05.606163 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 06 14:01:05 crc kubenswrapper[4869]: E0106 14:01:05.606157 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 06 14:01:05 crc kubenswrapper[4869]: E0106 14:01:05.606188 4869 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:01:05 crc kubenswrapper[4869]: E0106 14:01:05.606212 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-06 14:02:09.60619942 +0000 UTC m=+148.145887084 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 06 14:01:05 crc kubenswrapper[4869]: E0106 14:01:05.606211 4869 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 06 14:01:05 crc kubenswrapper[4869]: E0106 14:01:05.606242 4869 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:01:05 crc kubenswrapper[4869]: E0106 14:01:05.606266 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-06 14:02:09.606241281 +0000 UTC m=+148.145928985 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:01:05 crc kubenswrapper[4869]: E0106 14:01:05.606309 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-06 14:02:09.606286812 +0000 UTC m=+148.145974516 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.644068 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.644300 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.644546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.644632 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.644865 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:05Z","lastTransitionTime":"2026-01-06T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.703233 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:01:05 crc kubenswrapper[4869]: E0106 14:01:05.703345 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.703501 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:01:05 crc kubenswrapper[4869]: E0106 14:01:05.703547 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.703686 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:01:05 crc kubenswrapper[4869]: E0106 14:01:05.703729 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.706514 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:01:05 crc kubenswrapper[4869]: E0106 14:01:05.706834 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:09.706818483 +0000 UTC m=+148.246506147 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.747981 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.747999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.748009 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.748019 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.748027 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:05Z","lastTransitionTime":"2026-01-06T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.851055 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.851091 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.851099 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.851111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.851121 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:05Z","lastTransitionTime":"2026-01-06T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.953412 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.953450 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.953459 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.953472 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:05 crc kubenswrapper[4869]: I0106 14:01:05.953481 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:05Z","lastTransitionTime":"2026-01-06T14:01:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.057164 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.057195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.057203 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.057217 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.057226 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:06Z","lastTransitionTime":"2026-01-06T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.159887 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.159945 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.159967 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.159996 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.160017 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:06Z","lastTransitionTime":"2026-01-06T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.263139 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.263440 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.263513 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.263587 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.263653 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:06Z","lastTransitionTime":"2026-01-06T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.365871 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.365916 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.365924 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.365954 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.365972 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:06Z","lastTransitionTime":"2026-01-06T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.468495 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.468537 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.468549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.468567 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.468582 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:06Z","lastTransitionTime":"2026-01-06T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.570837 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.570878 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.570888 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.570905 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.570915 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:06Z","lastTransitionTime":"2026-01-06T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.675063 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.675155 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.675185 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.675223 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.675250 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:06Z","lastTransitionTime":"2026-01-06T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.704075 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:01:06 crc kubenswrapper[4869]: E0106 14:01:06.704591 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.779413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.779475 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.779491 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.779513 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.779528 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:06Z","lastTransitionTime":"2026-01-06T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.916996 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.917077 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.917103 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.917141 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:06 crc kubenswrapper[4869]: I0106 14:01:06.917166 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:06Z","lastTransitionTime":"2026-01-06T14:01:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.019901 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.019944 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.019953 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.019970 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.019982 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:07Z","lastTransitionTime":"2026-01-06T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.122977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.123053 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.123069 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.123098 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.123123 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:07Z","lastTransitionTime":"2026-01-06T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.226019 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.226085 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.226102 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.226127 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.226146 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:07Z","lastTransitionTime":"2026-01-06T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.330178 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.330251 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.330268 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.330291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.330306 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:07Z","lastTransitionTime":"2026-01-06T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.433995 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.434091 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.434116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.434153 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.434178 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:07Z","lastTransitionTime":"2026-01-06T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.537822 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.537957 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.537978 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.538008 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.538027 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:07Z","lastTransitionTime":"2026-01-06T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.640453 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.640519 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.640533 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.640551 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.640563 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:07Z","lastTransitionTime":"2026-01-06T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.703587 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.703711 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.703587 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:01:07 crc kubenswrapper[4869]: E0106 14:01:07.703942 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:01:07 crc kubenswrapper[4869]: E0106 14:01:07.704061 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:01:07 crc kubenswrapper[4869]: E0106 14:01:07.704215 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.743332 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.743420 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.743448 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.743490 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.743529 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:07Z","lastTransitionTime":"2026-01-06T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.847157 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.847218 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.847231 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.847249 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.847608 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:07Z","lastTransitionTime":"2026-01-06T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.951180 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.951249 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.951265 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.951291 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:07 crc kubenswrapper[4869]: I0106 14:01:07.951308 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:07Z","lastTransitionTime":"2026-01-06T14:01:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.054468 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.054517 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.054529 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.054549 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.054563 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:08Z","lastTransitionTime":"2026-01-06T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.157994 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.158508 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.158702 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.158828 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.158992 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:08Z","lastTransitionTime":"2026-01-06T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.261764 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.261814 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.261828 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.261853 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.261869 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:08Z","lastTransitionTime":"2026-01-06T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.364813 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.364872 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.364888 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.364913 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.364930 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:08Z","lastTransitionTime":"2026-01-06T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.468185 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.468257 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.468275 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.468308 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.468327 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:08Z","lastTransitionTime":"2026-01-06T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.570898 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.571372 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.571614 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.571778 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.571910 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:08Z","lastTransitionTime":"2026-01-06T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.675262 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.675467 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.675507 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.675524 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.675536 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:08Z","lastTransitionTime":"2026-01-06T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.703471 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:01:08 crc kubenswrapper[4869]: E0106 14:01:08.703920 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.779365 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.779424 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.779433 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.779478 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.779522 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:08Z","lastTransitionTime":"2026-01-06T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.882432 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.882509 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.882532 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.882565 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.882590 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:08Z","lastTransitionTime":"2026-01-06T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.985171 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.985235 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.985250 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.985275 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:08 crc kubenswrapper[4869]: I0106 14:01:08.985291 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:08Z","lastTransitionTime":"2026-01-06T14:01:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.088294 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.088337 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.088346 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.088363 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.088375 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:09Z","lastTransitionTime":"2026-01-06T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.190949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.191009 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.191019 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.191037 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.191049 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:09Z","lastTransitionTime":"2026-01-06T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.294111 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.294188 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.294207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.294238 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.294259 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:09Z","lastTransitionTime":"2026-01-06T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.397825 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.397897 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.397922 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.397952 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.397981 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:09Z","lastTransitionTime":"2026-01-06T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.501263 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.501355 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.501379 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.501413 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.501438 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:09Z","lastTransitionTime":"2026-01-06T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.604940 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.605028 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.605046 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.605077 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.605097 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:09Z","lastTransitionTime":"2026-01-06T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.704285 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.704371 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.704286 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:01:09 crc kubenswrapper[4869]: E0106 14:01:09.704543 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:01:09 crc kubenswrapper[4869]: E0106 14:01:09.704890 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:01:09 crc kubenswrapper[4869]: E0106 14:01:09.705037 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.708555 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.708590 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.708601 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.708618 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.708632 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:09Z","lastTransitionTime":"2026-01-06T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.816207 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.816636 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.816715 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.816805 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.816828 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:09Z","lastTransitionTime":"2026-01-06T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.921344 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.921430 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.921451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.921482 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:09 crc kubenswrapper[4869]: I0106 14:01:09.921504 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:09Z","lastTransitionTime":"2026-01-06T14:01:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.024573 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.024634 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.024655 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.024720 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.024741 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:10Z","lastTransitionTime":"2026-01-06T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.126711 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.126759 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.126768 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.126783 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.126792 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:10Z","lastTransitionTime":"2026-01-06T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.229523 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.229558 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.229565 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.229578 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.229589 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:10Z","lastTransitionTime":"2026-01-06T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.333914 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.333979 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.333997 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.334025 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.334044 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:10Z","lastTransitionTime":"2026-01-06T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.436593 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.436658 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.436708 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.436740 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.436760 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:10Z","lastTransitionTime":"2026-01-06T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.539314 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.539357 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.539366 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.539380 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.539390 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:10Z","lastTransitionTime":"2026-01-06T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.642460 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.642512 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.642530 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.642552 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.642566 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:10Z","lastTransitionTime":"2026-01-06T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.704390 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:01:10 crc kubenswrapper[4869]: E0106 14:01:10.704760 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.719705 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.745902 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.745988 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.746014 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.746044 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.746062 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:10Z","lastTransitionTime":"2026-01-06T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.849838 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.850195 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.850290 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.850397 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.850464 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:10Z","lastTransitionTime":"2026-01-06T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.952761 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.952795 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.952804 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.952823 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:10 crc kubenswrapper[4869]: I0106 14:01:10.952840 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:10Z","lastTransitionTime":"2026-01-06T14:01:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.055151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.055427 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.055492 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.055563 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.055636 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:11Z","lastTransitionTime":"2026-01-06T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.158860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.159204 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.159276 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.159345 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.159411 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:11Z","lastTransitionTime":"2026-01-06T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.262751 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.262834 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.262854 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.262882 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.262904 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:11Z","lastTransitionTime":"2026-01-06T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.366125 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.366216 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.366240 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.366275 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.366304 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:11Z","lastTransitionTime":"2026-01-06T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.470462 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.470511 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.470521 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.470537 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.470547 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:11Z","lastTransitionTime":"2026-01-06T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.574208 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.574273 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.574290 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.574317 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.574336 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:11Z","lastTransitionTime":"2026-01-06T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.678213 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.678293 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.678318 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.678352 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.678375 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:11Z","lastTransitionTime":"2026-01-06T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.704008 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.704045 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.704151 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:01:11 crc kubenswrapper[4869]: E0106 14:01:11.704343 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:01:11 crc kubenswrapper[4869]: E0106 14:01:11.704790 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:01:11 crc kubenswrapper[4869]: E0106 14:01:11.704910 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.763263 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=7.763226268 podStartE2EDuration="7.763226268s" podCreationTimestamp="2026-01-06 14:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:01:11.761501638 +0000 UTC m=+90.301189362" watchObservedRunningTime="2026-01-06 14:01:11.763226268 +0000 UTC m=+90.302913972" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.783451 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.783525 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.783546 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.783578 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.783600 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:11Z","lastTransitionTime":"2026-01-06T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.822247 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podStartSLOduration=72.822221931 podStartE2EDuration="1m12.822221931s" podCreationTimestamp="2026-01-06 13:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:01:11.807273955 +0000 UTC m=+90.346961659" watchObservedRunningTime="2026-01-06 14:01:11.822221931 +0000 UTC m=+90.361909605" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.822493 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-vjd79" podStartSLOduration=71.822487047 podStartE2EDuration="1m11.822487047s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:01:11.8217232 +0000 UTC m=+90.361410924" watchObservedRunningTime="2026-01-06 14:01:11.822487047 +0000 UTC m=+90.362174731" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.852914 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-tlkdn" podStartSLOduration=72.852894511 podStartE2EDuration="1m12.852894511s" podCreationTimestamp="2026-01-06 13:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:01:11.852837999 +0000 UTC m=+90.392525703" watchObservedRunningTime="2026-01-06 14:01:11.852894511 +0000 UTC m=+90.392582185" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.887188 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.887611 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.887834 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.887999 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.888115 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:11Z","lastTransitionTime":"2026-01-06T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.904420 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-68bvk" podStartSLOduration=71.904404311 podStartE2EDuration="1m11.904404311s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:01:11.903849698 +0000 UTC m=+90.443537372" watchObservedRunningTime="2026-01-06 14:01:11.904404311 +0000 UTC m=+90.444091985" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.921627 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=43.921605899 podStartE2EDuration="43.921605899s" podCreationTimestamp="2026-01-06 14:00:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:01:11.92034602 +0000 UTC m=+90.460033724" watchObservedRunningTime="2026-01-06 14:01:11.921605899 +0000 UTC m=+90.461293563" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.990963 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.990998 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.991007 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.991021 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:11 crc kubenswrapper[4869]: I0106 14:01:11.991029 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:11Z","lastTransitionTime":"2026-01-06T14:01:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.024750 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=2.024717213 podStartE2EDuration="2.024717213s" podCreationTimestamp="2026-01-06 14:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:01:12.0051173 +0000 UTC m=+90.544804994" watchObservedRunningTime="2026-01-06 14:01:12.024717213 +0000 UTC m=+90.564404887" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.040509 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.040480937 podStartE2EDuration="1m11.040480937s" podCreationTimestamp="2026-01-06 14:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:01:12.024601799 +0000 UTC m=+90.564289513" watchObservedRunningTime="2026-01-06 14:01:12.040480937 +0000 UTC m=+90.580168621" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.070006 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-4b8g7" podStartSLOduration=72.069983899 podStartE2EDuration="1m12.069983899s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:01:12.067925951 +0000 UTC m=+90.607613635" watchObservedRunningTime="2026-01-06 14:01:12.069983899 +0000 UTC m=+90.609671563" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.093281 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.093339 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.093354 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.093379 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.093393 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:12Z","lastTransitionTime":"2026-01-06T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.196628 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.196716 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.196732 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.196758 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.196775 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:12Z","lastTransitionTime":"2026-01-06T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.299011 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.299092 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.299115 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.299144 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.299166 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:12Z","lastTransitionTime":"2026-01-06T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.401761 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.401806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.401820 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.401839 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.401853 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:12Z","lastTransitionTime":"2026-01-06T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.505808 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.505860 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.505874 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.505893 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.505907 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:12Z","lastTransitionTime":"2026-01-06T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.608715 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.608774 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.608790 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.608809 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.608842 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:12Z","lastTransitionTime":"2026-01-06T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.704322 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:01:12 crc kubenswrapper[4869]: E0106 14:01:12.704794 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.705035 4869 scope.go:117] "RemoveContainer" containerID="eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9" Jan 06 14:01:12 crc kubenswrapper[4869]: E0106 14:01:12.705237 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2f9tq_openshift-ovn-kubernetes(487c527a-7d89-4175-8827-c8cdd6e0211f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.711817 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.711880 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.711892 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.711910 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.711945 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:12Z","lastTransitionTime":"2026-01-06T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.737304 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-64qxs" podStartSLOduration=72.737282294 podStartE2EDuration="1m12.737282294s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:01:12.093928942 +0000 UTC m=+90.633616606" watchObservedRunningTime="2026-01-06 14:01:12.737282294 +0000 UTC m=+91.276969958" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.814196 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.814249 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.814264 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.814281 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.814294 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:12Z","lastTransitionTime":"2026-01-06T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.918135 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.918185 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.918199 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.918222 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:12 crc kubenswrapper[4869]: I0106 14:01:12.918233 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:12Z","lastTransitionTime":"2026-01-06T14:01:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.020323 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.020371 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.020382 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.020402 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.020415 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:13Z","lastTransitionTime":"2026-01-06T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.124908 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.124961 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.124971 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.124991 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.125000 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:13Z","lastTransitionTime":"2026-01-06T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.228762 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.228839 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.228857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.228885 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.228905 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:13Z","lastTransitionTime":"2026-01-06T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.332832 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.332943 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.332968 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.333003 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.333027 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:13Z","lastTransitionTime":"2026-01-06T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.435888 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.436057 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.436084 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.436118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.436142 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:13Z","lastTransitionTime":"2026-01-06T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.540594 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.540659 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.540705 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.540737 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.540757 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:13Z","lastTransitionTime":"2026-01-06T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.644780 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.644839 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.644857 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.644885 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.644904 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:13Z","lastTransitionTime":"2026-01-06T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.703856 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.703934 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:01:13 crc kubenswrapper[4869]: E0106 14:01:13.704251 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.704281 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:01:13 crc kubenswrapper[4869]: E0106 14:01:13.704412 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:01:13 crc kubenswrapper[4869]: E0106 14:01:13.704535 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.748866 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.748929 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.748949 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.748979 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.749000 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:13Z","lastTransitionTime":"2026-01-06T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.853052 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.853118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.853140 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.853173 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.853201 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:13Z","lastTransitionTime":"2026-01-06T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.956789 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.956855 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.956881 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.956917 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:13 crc kubenswrapper[4869]: I0106 14:01:13.956943 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:13Z","lastTransitionTime":"2026-01-06T14:01:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.060518 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.060608 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.060627 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.060658 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.060719 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:14Z","lastTransitionTime":"2026-01-06T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.171152 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.171229 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.171250 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.171281 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.171312 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:14Z","lastTransitionTime":"2026-01-06T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.276360 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.276443 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.276466 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.276498 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.276531 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:14Z","lastTransitionTime":"2026-01-06T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.380060 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.380112 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.380126 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.380144 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.380156 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:14Z","lastTransitionTime":"2026-01-06T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.484045 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.484118 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.484131 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.484151 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.484162 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:14Z","lastTransitionTime":"2026-01-06T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.587603 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.587681 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.587693 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.587717 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.587729 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:14Z","lastTransitionTime":"2026-01-06T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.691395 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.691464 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.691475 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.691493 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.691504 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:14Z","lastTransitionTime":"2026-01-06T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.703862 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:01:14 crc kubenswrapper[4869]: E0106 14:01:14.704066 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.795689 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.795747 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.795758 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.795781 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.795806 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:14Z","lastTransitionTime":"2026-01-06T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.899116 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.899181 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.899201 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.899230 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:14 crc kubenswrapper[4869]: I0106 14:01:14.899250 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:14Z","lastTransitionTime":"2026-01-06T14:01:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.002987 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.003026 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.003035 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.003051 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.003062 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:15Z","lastTransitionTime":"2026-01-06T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.106123 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.106174 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.106186 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.106204 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.106217 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:15Z","lastTransitionTime":"2026-01-06T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.209700 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.209758 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.209775 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.209806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.209825 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:15Z","lastTransitionTime":"2026-01-06T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.224628 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.224806 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.224890 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.224977 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.225008 4869 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-06T14:01:15Z","lastTransitionTime":"2026-01-06T14:01:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.298584 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-krq2r"]
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.299120 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-krq2r"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.305447 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.306753 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.307367 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.307847 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.429383 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d587b657-f7c0-4992-9adb-d62235828d5c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-krq2r\" (UID: \"d587b657-f7c0-4992-9adb-d62235828d5c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-krq2r"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.429871 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d587b657-f7c0-4992-9adb-d62235828d5c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-krq2r\" (UID: \"d587b657-f7c0-4992-9adb-d62235828d5c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-krq2r"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.430000 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d587b657-f7c0-4992-9adb-d62235828d5c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-krq2r\" (UID: \"d587b657-f7c0-4992-9adb-d62235828d5c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-krq2r"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.430046 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d587b657-f7c0-4992-9adb-d62235828d5c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-krq2r\" (UID: \"d587b657-f7c0-4992-9adb-d62235828d5c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-krq2r"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.430100 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d587b657-f7c0-4992-9adb-d62235828d5c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-krq2r\" (UID: \"d587b657-f7c0-4992-9adb-d62235828d5c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-krq2r"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.530821 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d587b657-f7c0-4992-9adb-d62235828d5c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-krq2r\" (UID: \"d587b657-f7c0-4992-9adb-d62235828d5c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-krq2r"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.530913 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d587b657-f7c0-4992-9adb-d62235828d5c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-krq2r\" (UID: \"d587b657-f7c0-4992-9adb-d62235828d5c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-krq2r"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.530969 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d587b657-f7c0-4992-9adb-d62235828d5c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-krq2r\" (UID: \"d587b657-f7c0-4992-9adb-d62235828d5c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-krq2r"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.531127 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d587b657-f7c0-4992-9adb-d62235828d5c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-krq2r\" (UID: \"d587b657-f7c0-4992-9adb-d62235828d5c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-krq2r"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.531184 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d587b657-f7c0-4992-9adb-d62235828d5c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-krq2r\" (UID: \"d587b657-f7c0-4992-9adb-d62235828d5c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-krq2r"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.531365 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d587b657-f7c0-4992-9adb-d62235828d5c-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-krq2r\" (UID: \"d587b657-f7c0-4992-9adb-d62235828d5c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-krq2r"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.531416 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d587b657-f7c0-4992-9adb-d62235828d5c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-krq2r\" (UID: \"d587b657-f7c0-4992-9adb-d62235828d5c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-krq2r"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.532895 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d587b657-f7c0-4992-9adb-d62235828d5c-service-ca\") pod \"cluster-version-operator-5c965bbfc6-krq2r\" (UID: \"d587b657-f7c0-4992-9adb-d62235828d5c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-krq2r"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.544918 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d587b657-f7c0-4992-9adb-d62235828d5c-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-krq2r\" (UID: \"d587b657-f7c0-4992-9adb-d62235828d5c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-krq2r"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.570510 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d587b657-f7c0-4992-9adb-d62235828d5c-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-krq2r\" (UID: \"d587b657-f7c0-4992-9adb-d62235828d5c\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-krq2r"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.624693 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-krq2r"
Jan 06 14:01:15 crc kubenswrapper[4869]: W0106 14:01:15.645212 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd587b657_f7c0_4992_9adb_d62235828d5c.slice/crio-acd3a3e4e19363aab4a98fb07784af7b5fafc33bfc85f487bf1e4fcf231d9541 WatchSource:0}: Error finding container acd3a3e4e19363aab4a98fb07784af7b5fafc33bfc85f487bf1e4fcf231d9541: Status 404 returned error can't find the container with id acd3a3e4e19363aab4a98fb07784af7b5fafc33bfc85f487bf1e4fcf231d9541
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.703927 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.703984 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:01:15 crc kubenswrapper[4869]: I0106 14:01:15.703939 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:01:15 crc kubenswrapper[4869]: E0106 14:01:15.704119 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:01:15 crc kubenswrapper[4869]: E0106 14:01:15.704261 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:01:15 crc kubenswrapper[4869]: E0106 14:01:15.704835 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:01:16 crc kubenswrapper[4869]: I0106 14:01:16.321122 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-krq2r" event={"ID":"d587b657-f7c0-4992-9adb-d62235828d5c","Type":"ContainerStarted","Data":"f6e6bd61de4bc7db02e24695ee8b8c02b0d5d1b8c687649f8f6a977be6b14cbf"}
Jan 06 14:01:16 crc kubenswrapper[4869]: I0106 14:01:16.321246 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-krq2r" event={"ID":"d587b657-f7c0-4992-9adb-d62235828d5c","Type":"ContainerStarted","Data":"acd3a3e4e19363aab4a98fb07784af7b5fafc33bfc85f487bf1e4fcf231d9541"}
Jan 06 14:01:16 crc kubenswrapper[4869]: I0106 14:01:16.338366 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-krq2r" podStartSLOduration=76.338338959 podStartE2EDuration="1m16.338338959s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:01:16.338089883 +0000 UTC m=+94.877777547" watchObservedRunningTime="2026-01-06 14:01:16.338338959 +0000 UTC m=+94.878026623"
Jan 06 14:01:16 crc kubenswrapper[4869]: I0106 14:01:16.703408 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:01:16 crc kubenswrapper[4869]: E0106 14:01:16.703579 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:01:17 crc kubenswrapper[4869]: I0106 14:01:17.703776 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:01:17 crc kubenswrapper[4869]: I0106 14:01:17.703908 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:01:17 crc kubenswrapper[4869]: I0106 14:01:17.703919 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:01:17 crc kubenswrapper[4869]: E0106 14:01:17.704118 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:01:17 crc kubenswrapper[4869]: E0106 14:01:17.704247 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:01:17 crc kubenswrapper[4869]: E0106 14:01:17.704570 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:01:17 crc kubenswrapper[4869]: I0106 14:01:17.721537 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"]
Jan 06 14:01:18 crc kubenswrapper[4869]: I0106 14:01:18.565097 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs\") pod \"network-metrics-daemon-mmdq4\" (UID: \"b86d961d-74c0-40cb-912d-ae0db79d97f2\") " pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:01:18 crc kubenswrapper[4869]: E0106 14:01:18.565382 4869 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 06 14:01:18 crc kubenswrapper[4869]: E0106 14:01:18.565529 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs podName:b86d961d-74c0-40cb-912d-ae0db79d97f2 nodeName:}" failed. No retries permitted until 2026-01-06 14:02:22.565495964 +0000 UTC m=+161.105183638 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs") pod "network-metrics-daemon-mmdq4" (UID: "b86d961d-74c0-40cb-912d-ae0db79d97f2") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 06 14:01:18 crc kubenswrapper[4869]: I0106 14:01:18.703449 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:01:18 crc kubenswrapper[4869]: E0106 14:01:18.703768 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:01:19 crc kubenswrapper[4869]: I0106 14:01:19.704386 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:01:19 crc kubenswrapper[4869]: I0106 14:01:19.704508 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:01:19 crc kubenswrapper[4869]: E0106 14:01:19.704593 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:01:19 crc kubenswrapper[4869]: I0106 14:01:19.704751 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:01:19 crc kubenswrapper[4869]: E0106 14:01:19.704957 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:01:19 crc kubenswrapper[4869]: E0106 14:01:19.705159 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:01:20 crc kubenswrapper[4869]: I0106 14:01:20.703812 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:01:20 crc kubenswrapper[4869]: E0106 14:01:20.704058 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:01:21 crc kubenswrapper[4869]: I0106 14:01:21.703888 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:01:21 crc kubenswrapper[4869]: I0106 14:01:21.703944 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:01:21 crc kubenswrapper[4869]: I0106 14:01:21.704122 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:01:21 crc kubenswrapper[4869]: E0106 14:01:21.708025 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:01:21 crc kubenswrapper[4869]: E0106 14:01:21.708358 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:01:21 crc kubenswrapper[4869]: E0106 14:01:21.708453 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:01:21 crc kubenswrapper[4869]: I0106 14:01:21.730780 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=4.730748925 podStartE2EDuration="4.730748925s" podCreationTimestamp="2026-01-06 14:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:01:21.729964686 +0000 UTC m=+100.269652400" watchObservedRunningTime="2026-01-06 14:01:21.730748925 +0000 UTC m=+100.270436629"
Jan 06 14:01:22 crc kubenswrapper[4869]: I0106 14:01:22.703913 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:01:22 crc kubenswrapper[4869]: E0106 14:01:22.704196 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:01:23 crc kubenswrapper[4869]: I0106 14:01:23.703935 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:01:23 crc kubenswrapper[4869]: I0106 14:01:23.703987 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:01:23 crc kubenswrapper[4869]: E0106 14:01:23.704242 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:01:23 crc kubenswrapper[4869]: E0106 14:01:23.704373 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:01:23 crc kubenswrapper[4869]: I0106 14:01:23.703987 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:01:23 crc kubenswrapper[4869]: E0106 14:01:23.705121 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:01:24 crc kubenswrapper[4869]: I0106 14:01:24.704098 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:01:24 crc kubenswrapper[4869]: E0106 14:01:24.704412 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:01:24 crc kubenswrapper[4869]: I0106 14:01:24.705859 4869 scope.go:117] "RemoveContainer" containerID="eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9"
Jan 06 14:01:24 crc kubenswrapper[4869]: E0106 14:01:24.706180 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2f9tq_openshift-ovn-kubernetes(487c527a-7d89-4175-8827-c8cdd6e0211f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f"
Jan 06 14:01:25 crc kubenswrapper[4869]: I0106 14:01:25.704403 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:01:25 crc kubenswrapper[4869]: I0106 14:01:25.704489 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:01:25 crc kubenswrapper[4869]: E0106 14:01:25.705275 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:01:25 crc kubenswrapper[4869]: I0106 14:01:25.704646 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:01:25 crc kubenswrapper[4869]: E0106 14:01:25.705475 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:01:25 crc kubenswrapper[4869]: E0106 14:01:25.705555 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:01:26 crc kubenswrapper[4869]: I0106 14:01:26.704283 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:01:26 crc kubenswrapper[4869]: E0106 14:01:26.704500 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:01:27 crc kubenswrapper[4869]: I0106 14:01:27.704085 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:01:27 crc kubenswrapper[4869]: I0106 14:01:27.704180 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:01:27 crc kubenswrapper[4869]: E0106 14:01:27.704295 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:01:27 crc kubenswrapper[4869]: I0106 14:01:27.704340 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:01:27 crc kubenswrapper[4869]: E0106 14:01:27.704489 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:01:27 crc kubenswrapper[4869]: E0106 14:01:27.704623 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:01:28 crc kubenswrapper[4869]: I0106 14:01:28.704195 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:01:28 crc kubenswrapper[4869]: E0106 14:01:28.704372 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:01:29 crc kubenswrapper[4869]: I0106 14:01:29.703816 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:01:29 crc kubenswrapper[4869]: I0106 14:01:29.703902 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:01:29 crc kubenswrapper[4869]: E0106 14:01:29.703974 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:01:29 crc kubenswrapper[4869]: I0106 14:01:29.703851 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:01:29 crc kubenswrapper[4869]: E0106 14:01:29.704132 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:01:29 crc kubenswrapper[4869]: E0106 14:01:29.704222 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:01:30 crc kubenswrapper[4869]: I0106 14:01:30.704180 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:01:30 crc kubenswrapper[4869]: E0106 14:01:30.704472 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:01:31 crc kubenswrapper[4869]: I0106 14:01:31.704500 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:01:31 crc kubenswrapper[4869]: I0106 14:01:31.704606 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:01:31 crc kubenswrapper[4869]: I0106 14:01:31.704779 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:01:31 crc kubenswrapper[4869]: E0106 14:01:31.705461 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:01:31 crc kubenswrapper[4869]: E0106 14:01:31.705636 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:01:31 crc kubenswrapper[4869]: E0106 14:01:31.705732 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:01:32 crc kubenswrapper[4869]: I0106 14:01:32.704176 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:01:32 crc kubenswrapper[4869]: E0106 14:01:32.704341 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:01:33 crc kubenswrapper[4869]: I0106 14:01:33.704618 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:01:33 crc kubenswrapper[4869]: I0106 14:01:33.704728 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:01:33 crc kubenswrapper[4869]: I0106 14:01:33.704743 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:01:33 crc kubenswrapper[4869]: E0106 14:01:33.704886 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:01:33 crc kubenswrapper[4869]: E0106 14:01:33.705051 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:01:33 crc kubenswrapper[4869]: E0106 14:01:33.705264 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:01:34 crc kubenswrapper[4869]: I0106 14:01:34.703569 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:01:34 crc kubenswrapper[4869]: E0106 14:01:34.704022 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:01:35 crc kubenswrapper[4869]: I0106 14:01:35.704401 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:01:35 crc kubenswrapper[4869]: I0106 14:01:35.704451 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:01:35 crc kubenswrapper[4869]: E0106 14:01:35.704564 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:01:35 crc kubenswrapper[4869]: I0106 14:01:35.704827 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:01:35 crc kubenswrapper[4869]: E0106 14:01:35.704903 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:01:35 crc kubenswrapper[4869]: E0106 14:01:35.705063 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:01:36 crc kubenswrapper[4869]: I0106 14:01:36.397091 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-68bvk_e40cdd2b-5d24-4ef5-995a-4e09fc90d33c/kube-multus/1.log"
Jan 06 14:01:36 crc kubenswrapper[4869]: I0106 14:01:36.398117 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-68bvk_e40cdd2b-5d24-4ef5-995a-4e09fc90d33c/kube-multus/0.log"
Jan 06 14:01:36 crc kubenswrapper[4869]: I0106 14:01:36.398216 4869 generic.go:334] "Generic (PLEG): container finished" podID="e40cdd2b-5d24-4ef5-995a-4e09fc90d33c" containerID="4d3985462b751fad731c61b70bd276f0e2c8159ecea877bc89ed7066061842da" exitCode=1
Jan 06 14:01:36 crc kubenswrapper[4869]: I0106 14:01:36.398274 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-68bvk" event={"ID":"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c","Type":"ContainerDied","Data":"4d3985462b751fad731c61b70bd276f0e2c8159ecea877bc89ed7066061842da"}
Jan 06 14:01:36 crc kubenswrapper[4869]: I0106 14:01:36.398352 4869 scope.go:117] "RemoveContainer" containerID="7a89f772d598b8ab3bae01a2629a8990d4dbcb7bacfe4d2b68d29675082fb724"
Jan 06 14:01:36 crc kubenswrapper[4869]: I0106 14:01:36.399632 4869 scope.go:117] "RemoveContainer" containerID="4d3985462b751fad731c61b70bd276f0e2c8159ecea877bc89ed7066061842da"
Jan 06 14:01:36 crc kubenswrapper[4869]: E0106 14:01:36.399987 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-68bvk_openshift-multus(e40cdd2b-5d24-4ef5-995a-4e09fc90d33c)\"" pod="openshift-multus/multus-68bvk" podUID="e40cdd2b-5d24-4ef5-995a-4e09fc90d33c"
Jan 06 14:01:36 crc kubenswrapper[4869]: I0106 14:01:36.703459 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:01:36 crc kubenswrapper[4869]: E0106 14:01:36.703614 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:01:37 crc kubenswrapper[4869]: I0106 14:01:37.405008 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-68bvk_e40cdd2b-5d24-4ef5-995a-4e09fc90d33c/kube-multus/1.log"
Jan 06 14:01:37 crc kubenswrapper[4869]: I0106 14:01:37.703791 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:01:37 crc kubenswrapper[4869]: I0106 14:01:37.703871 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:01:37 crc kubenswrapper[4869]: E0106 14:01:37.703923 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:01:37 crc kubenswrapper[4869]: E0106 14:01:37.704117 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:01:37 crc kubenswrapper[4869]: I0106 14:01:37.704169 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:01:37 crc kubenswrapper[4869]: E0106 14:01:37.704227 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:01:37 crc kubenswrapper[4869]: I0106 14:01:37.705005 4869 scope.go:117] "RemoveContainer" containerID="eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9"
Jan 06 14:01:37 crc kubenswrapper[4869]: E0106 14:01:37.705188 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2f9tq_openshift-ovn-kubernetes(487c527a-7d89-4175-8827-c8cdd6e0211f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f"
Jan 06 14:01:38 crc kubenswrapper[4869]: I0106 14:01:38.703406 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:01:38 crc kubenswrapper[4869]: E0106 14:01:38.703725 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:01:39 crc kubenswrapper[4869]: I0106 14:01:39.703977 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:01:39 crc kubenswrapper[4869]: E0106 14:01:39.704982 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:01:39 crc kubenswrapper[4869]: I0106 14:01:39.705024 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:01:39 crc kubenswrapper[4869]: I0106 14:01:39.705075 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:01:39 crc kubenswrapper[4869]: E0106 14:01:39.705142 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:01:39 crc kubenswrapper[4869]: E0106 14:01:39.705206 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:01:40 crc kubenswrapper[4869]: I0106 14:01:40.703650 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:01:40 crc kubenswrapper[4869]: E0106 14:01:40.704387 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:01:41 crc kubenswrapper[4869]: I0106 14:01:41.704325 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:01:41 crc kubenswrapper[4869]: I0106 14:01:41.704396 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:01:41 crc kubenswrapper[4869]: E0106 14:01:41.706882 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:01:41 crc kubenswrapper[4869]: I0106 14:01:41.706938 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:01:41 crc kubenswrapper[4869]: E0106 14:01:41.707781 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:01:41 crc kubenswrapper[4869]: E0106 14:01:41.707910 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:01:41 crc kubenswrapper[4869]: E0106 14:01:41.709594 4869 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Jan 06 14:01:41 crc kubenswrapper[4869]: E0106 14:01:41.810628 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 06 14:01:42 crc kubenswrapper[4869]: I0106 14:01:42.704444 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:01:42 crc kubenswrapper[4869]: E0106 14:01:42.705116 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:01:43 crc kubenswrapper[4869]: I0106 14:01:43.704078 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:01:43 crc kubenswrapper[4869]: I0106 14:01:43.704093 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:01:43 crc kubenswrapper[4869]: I0106 14:01:43.704939 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:01:43 crc kubenswrapper[4869]: E0106 14:01:43.705140 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:01:43 crc kubenswrapper[4869]: E0106 14:01:43.705783 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:01:43 crc kubenswrapper[4869]: E0106 14:01:43.705894 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:01:44 crc kubenswrapper[4869]: I0106 14:01:44.704213 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:01:44 crc kubenswrapper[4869]: E0106 14:01:44.704479 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:01:45 crc kubenswrapper[4869]: I0106 14:01:45.703506 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:01:45 crc kubenswrapper[4869]: I0106 14:01:45.703596 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:01:45 crc kubenswrapper[4869]: E0106 14:01:45.703767 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:01:45 crc kubenswrapper[4869]: I0106 14:01:45.703861 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:01:45 crc kubenswrapper[4869]: E0106 14:01:45.704092 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:01:45 crc kubenswrapper[4869]: E0106 14:01:45.704218 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:01:46 crc kubenswrapper[4869]: I0106 14:01:46.703952 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:01:46 crc kubenswrapper[4869]: E0106 14:01:46.704163 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:01:46 crc kubenswrapper[4869]: E0106 14:01:46.812601 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 06 14:01:47 crc kubenswrapper[4869]: I0106 14:01:47.703508 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:01:47 crc kubenswrapper[4869]: I0106 14:01:47.703616 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:01:47 crc kubenswrapper[4869]: E0106 14:01:47.703651 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:01:47 crc kubenswrapper[4869]: E0106 14:01:47.703855 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:01:47 crc kubenswrapper[4869]: I0106 14:01:47.703918 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:01:47 crc kubenswrapper[4869]: E0106 14:01:47.704034 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:01:48 crc kubenswrapper[4869]: I0106 14:01:48.704250 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:01:48 crc kubenswrapper[4869]: I0106 14:01:48.704771 4869 scope.go:117] "RemoveContainer" containerID="4d3985462b751fad731c61b70bd276f0e2c8159ecea877bc89ed7066061842da"
Jan 06 14:01:48 crc kubenswrapper[4869]: E0106 14:01:48.705597 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:01:49 crc kubenswrapper[4869]: I0106 14:01:49.450326 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-68bvk_e40cdd2b-5d24-4ef5-995a-4e09fc90d33c/kube-multus/1.log"
Jan 06 14:01:49 crc kubenswrapper[4869]: I0106 14:01:49.450877 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-68bvk" event={"ID":"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c","Type":"ContainerStarted","Data":"28ab89a767dce736b75ce450ca28d8c5cfff1dd703089e2e14a3e607fb54d1b6"}
Jan 06 14:01:49 crc kubenswrapper[4869]: I0106 14:01:49.704711 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:01:49 crc kubenswrapper[4869]: I0106 14:01:49.704704 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:01:49 crc kubenswrapper[4869]: I0106 14:01:49.704716 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:01:49 crc kubenswrapper[4869]: I0106 14:01:49.705239 4869 scope.go:117] "RemoveContainer" containerID="eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9"
Jan 06 14:01:49 crc kubenswrapper[4869]: E0106 14:01:49.705327 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:01:49 crc kubenswrapper[4869]: E0106 14:01:49.705406 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:01:49 crc kubenswrapper[4869]: E0106 14:01:49.705554 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:01:50 crc kubenswrapper[4869]: I0106 14:01:50.459171 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2f9tq_487c527a-7d89-4175-8827-c8cdd6e0211f/ovnkube-controller/3.log"
Jan 06 14:01:50 crc kubenswrapper[4869]: I0106 14:01:50.464314 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerStarted","Data":"f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb"}
Jan 06 14:01:50 crc kubenswrapper[4869]: I0106 14:01:50.464936 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq"
Jan 06 14:01:50 crc kubenswrapper[4869]: I0106 14:01:50.500599 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" podStartSLOduration=110.500571401 podStartE2EDuration="1m50.500571401s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:01:50.498952263 +0000 UTC m=+129.038639957" watchObservedRunningTime="2026-01-06 14:01:50.500571401 +0000 UTC m=+129.040259105"
Jan 06 14:01:50 crc kubenswrapper[4869]: I0106 14:01:50.511160 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mmdq4"]
Jan 06 14:01:50 crc kubenswrapper[4869]: I0106 14:01:50.511359 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:01:50 crc kubenswrapper[4869]: E0106 14:01:50.511527 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:01:51 crc kubenswrapper[4869]: I0106 14:01:51.703379 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:01:51 crc kubenswrapper[4869]: I0106 14:01:51.703385 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 06 14:01:51 crc kubenswrapper[4869]: I0106 14:01:51.703403 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:01:51 crc kubenswrapper[4869]: E0106 14:01:51.705271 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:01:51 crc kubenswrapper[4869]: E0106 14:01:51.705331 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 06 14:01:51 crc kubenswrapper[4869]: E0106 14:01:51.705431 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:01:51 crc kubenswrapper[4869]: E0106 14:01:51.814797 4869 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 06 14:01:52 crc kubenswrapper[4869]: I0106 14:01:52.703795 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4"
Jan 06 14:01:52 crc kubenswrapper[4869]: E0106 14:01:52.704267 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2"
Jan 06 14:01:53 crc kubenswrapper[4869]: I0106 14:01:53.703617 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 06 14:01:53 crc kubenswrapper[4869]: I0106 14:01:53.703632 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 06 14:01:53 crc kubenswrapper[4869]: E0106 14:01:53.703900 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 06 14:01:53 crc kubenswrapper[4869]: E0106 14:01:53.703996 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 06 14:01:53 crc kubenswrapper[4869]: I0106 14:01:53.704585 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:01:53 crc kubenswrapper[4869]: E0106 14:01:53.705906 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:01:54 crc kubenswrapper[4869]: I0106 14:01:54.703728 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:01:54 crc kubenswrapper[4869]: E0106 14:01:54.703891 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:01:55 crc kubenswrapper[4869]: I0106 14:01:55.704252 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:01:55 crc kubenswrapper[4869]: I0106 14:01:55.704256 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:01:55 crc kubenswrapper[4869]: E0106 14:01:55.704440 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 06 14:01:55 crc kubenswrapper[4869]: E0106 14:01:55.704534 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 06 14:01:55 crc kubenswrapper[4869]: I0106 14:01:55.704805 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:01:55 crc kubenswrapper[4869]: E0106 14:01:55.704972 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 06 14:01:56 crc kubenswrapper[4869]: I0106 14:01:56.704175 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:01:56 crc kubenswrapper[4869]: E0106 14:01:56.704445 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mmdq4" podUID="b86d961d-74c0-40cb-912d-ae0db79d97f2" Jan 06 14:01:57 crc kubenswrapper[4869]: I0106 14:01:57.704034 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:01:57 crc kubenswrapper[4869]: I0106 14:01:57.704823 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:01:57 crc kubenswrapper[4869]: I0106 14:01:57.704934 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:01:57 crc kubenswrapper[4869]: I0106 14:01:57.707294 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 06 14:01:57 crc kubenswrapper[4869]: I0106 14:01:57.707934 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 06 14:01:57 crc kubenswrapper[4869]: I0106 14:01:57.710085 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 06 14:01:57 crc kubenswrapper[4869]: I0106 14:01:57.710106 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 06 14:01:58 crc kubenswrapper[4869]: I0106 14:01:58.704261 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:01:58 crc kubenswrapper[4869]: I0106 14:01:58.707417 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 06 14:01:58 crc kubenswrapper[4869]: I0106 14:01:58.707648 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 06 14:02:03 crc kubenswrapper[4869]: I0106 14:02:03.622731 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:02:03 crc kubenswrapper[4869]: I0106 14:02:03.622792 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.421757 4869 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.477908 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dgxjm"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.478571 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6mn2d"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.478649 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.479444 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6mn2d" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.484199 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.484718 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.484870 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.486372 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8t96r"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.486957 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qr849"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.487159 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-8t96r" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.487707 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.488044 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.488964 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-vx9gs"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.489361 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-vx9gs" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.489954 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.490432 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-b9gld"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.490928 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.491499 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vh62x"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.491581 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.491946 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.491970 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.492135 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.492485 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-d9zlg"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.492959 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-d9zlg" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.495391 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.496029 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgftz"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.496348 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xv5r2"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.496729 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xv5r2" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.497245 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.497559 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgftz" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.499386 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-sc7mj"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.500068 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hgpcv"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.500526 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dgtcf"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.500700 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sc7mj" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.501154 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dgtcf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.501395 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7tlrk"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.501867 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7tlrk" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.501970 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hgpcv" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.533889 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.533959 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.534026 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.534173 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.534201 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.534322 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.534331 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.534399 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.534422 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.534442 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 06 14:02:06 crc 
kubenswrapper[4869]: I0106 14:02:06.534479 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.534603 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.534858 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.559624 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.559918 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.559970 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.560083 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.560180 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.560215 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.560224 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.560262 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.560183 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.560352 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.570902 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-4sgbs"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.571861 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-4sgbs" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.573384 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.573863 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.574144 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.574294 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.574447 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.574838 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.575013 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.575174 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.575328 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.575690 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.575695 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.575865 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.576014 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.576122 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.576266 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.576294 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.576489 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.576642 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.576939 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 
14:02:06.583028 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5jk5b"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.588896 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.616229 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.624791 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.625066 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.625220 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.625363 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.625506 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.627418 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g6rsl"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.628034 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g6rsl" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.640006 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.640327 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.640912 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.641189 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.641229 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.641519 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.641608 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.641856 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.641918 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 
14:02:06.642215 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.642420 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.642971 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.643019 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.643138 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.643248 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.643293 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.643365 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.643419 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.643250 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.643504 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.643733 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.643884 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.643901 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.643977 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-g9bkv"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.644729 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ccwrq"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.644788 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-g9bkv" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.645416 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ccwrq" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.646882 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.647129 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.647275 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.647414 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.647525 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.647638 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.647814 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.647884 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.648258 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.648545 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.652431 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.655904 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ab544c1b-884d-47a9-9e75-b133b58ca4db-node-pullsecrets\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.655942 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/96e8a661-1f08-489b-afcb-18f86bf6d4e3-encryption-config\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.655965 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62309e3d-7bdc-4573-8a0d-5b485f618ffe-service-ca-bundle\") pod \"authentication-operator-69f744f599-hgpcv\" (UID: \"62309e3d-7bdc-4573-8a0d-5b485f618ffe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hgpcv" Jan 06 14:02:06 crc 
kubenswrapper[4869]: I0106 14:02:06.655982 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dtb6\" (UniqueName: \"kubernetes.io/projected/959dc13f-609b-4272-abe4-e26a0f79ab8c-kube-api-access-5dtb6\") pod \"console-f9d7485db-b9gld\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.655997 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34125ddb-6d12-42f3-9759-ba14a484f117-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xv5r2\" (UID: \"34125ddb-6d12-42f3-9759-ba14a484f117\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xv5r2" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656014 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7462c7be-1f9d-4f4b-a844-71a3518a27e2-config\") pod \"machine-api-operator-5694c8668f-8t96r\" (UID: \"7462c7be-1f9d-4f4b-a844-71a3518a27e2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8t96r" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656031 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f66ca0c-0cf9-40d8-9ed3-e55a3ce6a399-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7tlrk\" (UID: \"0f66ca0c-0cf9-40d8-9ed3-e55a3ce6a399\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7tlrk" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656048 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-trusted-ca-bundle\") pod \"console-f9d7485db-b9gld\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656067 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ab544c1b-884d-47a9-9e75-b133b58ca4db-image-import-ca\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656084 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/959dc13f-609b-4272-abe4-e26a0f79ab8c-console-oauth-config\") pod \"console-f9d7485db-b9gld\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656113 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8e5dcd19-170b-4d3a-b1f2-995f97fdad41-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dgtcf\" (UID: \"8e5dcd19-170b-4d3a-b1f2-995f97fdad41\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dgtcf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656130 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70fb1714-50f4-4504-8912-1b0ed4fb508e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pgftz\" (UID: \"70fb1714-50f4-4504-8912-1b0ed4fb508e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgftz" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656144 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-oauth-serving-cert\") pod \"console-f9d7485db-b9gld\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656161 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j6p7\" (UniqueName: \"kubernetes.io/projected/34125ddb-6d12-42f3-9759-ba14a484f117-kube-api-access-2j6p7\") pod \"cluster-image-registry-operator-dc59b4c8b-xv5r2\" (UID: \"34125ddb-6d12-42f3-9759-ba14a484f117\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xv5r2" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656178 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f26c6409-5ba8-4b46-bb01-9a038091cdfd-metrics-tls\") pod \"dns-operator-744455d44c-d9zlg\" (UID: \"f26c6409-5ba8-4b46-bb01-9a038091cdfd\") " pod="openshift-dns-operator/dns-operator-744455d44c-d9zlg" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656194 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad-config\") pod \"machine-approver-56656f9798-sc7mj\" (UID: \"be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sc7mj" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656216 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrlnp\" (UniqueName: \"kubernetes.io/projected/f26c6409-5ba8-4b46-bb01-9a038091cdfd-kube-api-access-zrlnp\") pod \"dns-operator-744455d44c-d9zlg\" (UID: \"f26c6409-5ba8-4b46-bb01-9a038091cdfd\") " pod="openshift-dns-operator/dns-operator-744455d44c-d9zlg" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656234 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd17fb22-d612-4949-8e94-f0aa870439d9-config\") pod \"route-controller-manager-6576b87f9c-bm7df\" (UID: \"cd17fb22-d612-4949-8e94-f0aa870439d9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656250 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/34125ddb-6d12-42f3-9759-ba14a484f117-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xv5r2\" (UID: \"34125ddb-6d12-42f3-9759-ba14a484f117\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xv5r2" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 
14:02:06.656265 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62309e3d-7bdc-4573-8a0d-5b485f618ffe-config\") pod \"authentication-operator-69f744f599-hgpcv\" (UID: \"62309e3d-7bdc-4573-8a0d-5b485f618ffe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hgpcv" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656280 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fcc80584-0b81-45b0-a790-539bfc78c894-etcd-ca\") pod \"etcd-operator-b45778765-vh62x\" (UID: \"fcc80584-0b81-45b0-a790-539bfc78c894\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656294 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd17fb22-d612-4949-8e94-f0aa870439d9-client-ca\") pod \"route-controller-manager-6576b87f9c-bm7df\" (UID: \"cd17fb22-d612-4949-8e94-f0aa870439d9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656317 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96e8a661-1f08-489b-afcb-18f86bf6d4e3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656335 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49e49b04-3d85-4323-931e-d0d341d52650-serving-cert\") pod \"console-operator-58897d9998-6mn2d\" (UID: \"49e49b04-3d85-4323-931e-d0d341d52650\") " pod="openshift-console-operator/console-operator-58897d9998-6mn2d" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656352 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab544c1b-884d-47a9-9e75-b133b58ca4db-audit-dir\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656368 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7462c7be-1f9d-4f4b-a844-71a3518a27e2-images\") pod \"machine-api-operator-5694c8668f-8t96r\" (UID: \"7462c7be-1f9d-4f4b-a844-71a3518a27e2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8t96r" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656384 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-service-ca\") pod \"console-f9d7485db-b9gld\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656400 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/fcc80584-0b81-45b0-a790-539bfc78c894-etcd-client\") pod \"etcd-operator-b45778765-vh62x\" (UID: \"fcc80584-0b81-45b0-a790-539bfc78c894\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656418 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lz2w\" (UniqueName: \"kubernetes.io/projected/0f66ca0c-0cf9-40d8-9ed3-e55a3ce6a399-kube-api-access-2lz2w\") pod \"cluster-samples-operator-665b6dd947-7tlrk\" (UID: \"0f66ca0c-0cf9-40d8-9ed3-e55a3ce6a399\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7tlrk" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656432 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ab544c1b-884d-47a9-9e75-b133b58ca4db-audit\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656449 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/96e8a661-1f08-489b-afcb-18f86bf6d4e3-etcd-client\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656461 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxxm6\" (UniqueName: \"kubernetes.io/projected/96e8a661-1f08-489b-afcb-18f86bf6d4e3-kube-api-access-jxxm6\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656478 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvs65\" (UniqueName: \"kubernetes.io/projected/7462c7be-1f9d-4f4b-a844-71a3518a27e2-kube-api-access-fvs65\") pod \"machine-api-operator-5694c8668f-8t96r\" (UID: \"7462c7be-1f9d-4f4b-a844-71a3518a27e2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8t96r" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656498 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-serving-cert\") pod \"controller-manager-879f6c89f-dgxjm\" (UID: \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656515 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ab544c1b-884d-47a9-9e75-b133b58ca4db-encryption-config\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656529 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd17fb22-d612-4949-8e94-f0aa870439d9-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-bm7df\" (UID: \"cd17fb22-d612-4949-8e94-f0aa870439d9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656550 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z67kw\" (UniqueName: \"kubernetes.io/projected/cd17fb22-d612-4949-8e94-f0aa870439d9-kube-api-access-z67kw\") pod \"route-controller-manager-6576b87f9c-bm7df\" (UID: \"cd17fb22-d612-4949-8e94-f0aa870439d9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656566 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96e8a661-1f08-489b-afcb-18f86bf6d4e3-serving-cert\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656581 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fb99\" (UniqueName: \"kubernetes.io/projected/fcc80584-0b81-45b0-a790-539bfc78c894-kube-api-access-2fb99\") pod \"etcd-operator-b45778765-vh62x\" (UID: \"fcc80584-0b81-45b0-a790-539bfc78c894\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656597 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrnnv\" (UniqueName: \"kubernetes.io/projected/f1d294f9-a755-49bc-bc10-5b4e9739a914-kube-api-access-zrnnv\") pod \"downloads-7954f5f757-vx9gs\" (UID: \"f1d294f9-a755-49bc-bc10-5b4e9739a914\") " pod="openshift-console/downloads-7954f5f757-vx9gs" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656610 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34125ddb-6d12-42f3-9759-ba14a484f117-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xv5r2\" (UID: \"34125ddb-6d12-42f3-9759-ba14a484f117\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xv5r2" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656627 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2ps4\" (UniqueName: \"kubernetes.io/projected/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-kube-api-access-w2ps4\") pod \"controller-manager-879f6c89f-dgxjm\" (UID: \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656643 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e49b04-3d85-4323-931e-d0d341d52650-config\") pod \"console-operator-58897d9998-6mn2d\" (UID: \"49e49b04-3d85-4323-931e-d0d341d52650\") " pod="openshift-console-operator/console-operator-58897d9998-6mn2d" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656658 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ab544c1b-884d-47a9-9e75-b133b58ca4db-etcd-serving-ca\") pod 
\"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656692 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jklh5\" (UniqueName: \"kubernetes.io/projected/8e5dcd19-170b-4d3a-b1f2-995f97fdad41-kube-api-access-jklh5\") pod \"openshift-config-operator-7777fb866f-dgtcf\" (UID: \"8e5dcd19-170b-4d3a-b1f2-995f97fdad41\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dgtcf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656707 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70fb1714-50f4-4504-8912-1b0ed4fb508e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pgftz\" (UID: \"70fb1714-50f4-4504-8912-1b0ed4fb508e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgftz" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656722 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55zvk\" (UniqueName: \"kubernetes.io/projected/be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad-kube-api-access-55zvk\") pod \"machine-approver-56656f9798-sc7mj\" (UID: \"be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sc7mj" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656737 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wkzt\" (UniqueName: \"kubernetes.io/projected/70fb1714-50f4-4504-8912-1b0ed4fb508e-kube-api-access-2wkzt\") pod \"openshift-controller-manager-operator-756b6f6bc6-pgftz\" (UID: \"70fb1714-50f4-4504-8912-1b0ed4fb508e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgftz" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656752 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-console-config\") pod \"console-f9d7485db-b9gld\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656767 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/96e8a661-1f08-489b-afcb-18f86bf6d4e3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656783 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab544c1b-884d-47a9-9e75-b133b58ca4db-config\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656797 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab544c1b-884d-47a9-9e75-b133b58ca4db-trusted-ca-bundle\") pod 
\"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656811 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62309e3d-7bdc-4573-8a0d-5b485f618ffe-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hgpcv\" (UID: \"62309e3d-7bdc-4573-8a0d-5b485f618ffe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hgpcv" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656825 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62309e3d-7bdc-4573-8a0d-5b485f618ffe-serving-cert\") pod \"authentication-operator-69f744f599-hgpcv\" (UID: \"62309e3d-7bdc-4573-8a0d-5b485f618ffe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hgpcv" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656840 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjhhr\" (UniqueName: \"kubernetes.io/projected/62309e3d-7bdc-4573-8a0d-5b485f618ffe-kube-api-access-jjhhr\") pod \"authentication-operator-69f744f599-hgpcv\" (UID: \"62309e3d-7bdc-4573-8a0d-5b485f618ffe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hgpcv" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656853 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fcc80584-0b81-45b0-a790-539bfc78c894-etcd-service-ca\") pod \"etcd-operator-b45778765-vh62x\" (UID: \"fcc80584-0b81-45b0-a790-539bfc78c894\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656867 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqrlb\" (UniqueName: \"kubernetes.io/projected/ab544c1b-884d-47a9-9e75-b133b58ca4db-kube-api-access-gqrlb\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656882 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/96e8a661-1f08-489b-afcb-18f86bf6d4e3-audit-dir\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656898 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7rwm\" (UniqueName: \"kubernetes.io/projected/49e49b04-3d85-4323-931e-d0d341d52650-kube-api-access-q7rwm\") pod \"console-operator-58897d9998-6mn2d\" (UID: \"49e49b04-3d85-4323-931e-d0d341d52650\") " pod="openshift-console-operator/console-operator-58897d9998-6mn2d" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656912 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad-machine-approver-tls\") pod \"machine-approver-56656f9798-sc7mj\" 
(UID: \"be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sc7mj" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656928 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49e49b04-3d85-4323-931e-d0d341d52650-trusted-ca\") pod \"console-operator-58897d9998-6mn2d\" (UID: \"49e49b04-3d85-4323-931e-d0d341d52650\") " pod="openshift-console-operator/console-operator-58897d9998-6mn2d" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656938 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.656942 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e5dcd19-170b-4d3a-b1f2-995f97fdad41-serving-cert\") pod \"openshift-config-operator-7777fb866f-dgtcf\" (UID: \"8e5dcd19-170b-4d3a-b1f2-995f97fdad41\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dgtcf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.657315 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/959dc13f-609b-4272-abe4-e26a0f79ab8c-console-serving-cert\") pod \"console-f9d7485db-b9gld\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.657331 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-config\") pod \"controller-manager-879f6c89f-dgxjm\" (UID: \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.657345 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad-auth-proxy-config\") pod \"machine-approver-56656f9798-sc7mj\" (UID: \"be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sc7mj" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.657360 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcc80584-0b81-45b0-a790-539bfc78c894-serving-cert\") pod \"etcd-operator-b45778765-vh62x\" (UID: \"fcc80584-0b81-45b0-a790-539bfc78c894\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.657373 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab544c1b-884d-47a9-9e75-b133b58ca4db-serving-cert\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.657392 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dgxjm\" (UID: \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.657409 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7462c7be-1f9d-4f4b-a844-71a3518a27e2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8t96r\" (UID: \"7462c7be-1f9d-4f4b-a844-71a3518a27e2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8t96r" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.657423 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/96e8a661-1f08-489b-afcb-18f86bf6d4e3-audit-policies\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.657437 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-client-ca\") pod \"controller-manager-879f6c89f-dgxjm\" (UID: \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.657450 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc80584-0b81-45b0-a790-539bfc78c894-config\") pod \"etcd-operator-b45778765-vh62x\" (UID: \"fcc80584-0b81-45b0-a790-539bfc78c894\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.657463 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ab544c1b-884d-47a9-9e75-b133b58ca4db-etcd-client\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.658619 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.658741 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.658771 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qcbs8"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.658818 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.658892 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.658970 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.659105 4869 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.659429 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qcbs8" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.659649 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.664242 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f52zz"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.664529 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l65qs"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.665041 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l65qs" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.661880 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.665462 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.665617 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f52zz" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.662079 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.663201 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.667683 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.670884 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9nkqd"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.671897 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h6xlw"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.672307 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.672837 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qxbrk"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.673122 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nkqd" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.673193 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.674997 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qxbrk" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.681028 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9zcbm"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.681719 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9zcbm" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.691184 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-86wsv"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.691895 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-86wsv" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.696909 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb4p6"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.697768 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb4p6" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.701225 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.712013 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.719098 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.720848 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-22vrd"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.722344 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-22vrd" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.726611 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.726718 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.727349 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.730045 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.734737 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.735352 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.737513 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.737830 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x46jf"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.738593 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vg6sr"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.739331 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vg6sr" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.739848 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x46jf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.742762 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.744169 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kznks"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.745501 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kznks" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.754868 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9kkzq"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.755510 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qmjgl"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.755980 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.756280 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9kkzq" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.757937 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvs65\" (UniqueName: \"kubernetes.io/projected/7462c7be-1f9d-4f4b-a844-71a3518a27e2-kube-api-access-fvs65\") pod \"machine-api-operator-5694c8668f-8t96r\" (UID: \"7462c7be-1f9d-4f4b-a844-71a3518a27e2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8t96r" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.757978 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd17fb22-d612-4949-8e94-f0aa870439d9-serving-cert\") pod \"route-controller-manager-6576b87f9c-bm7df\" (UID: \"cd17fb22-d612-4949-8e94-f0aa870439d9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.758014 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-serving-cert\") pod \"controller-manager-879f6c89f-dgxjm\" (UID: \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.758036 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ab544c1b-884d-47a9-9e75-b133b58ca4db-encryption-config\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.758070 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z67kw\" (UniqueName: \"kubernetes.io/projected/cd17fb22-d612-4949-8e94-f0aa870439d9-kube-api-access-z67kw\") pod \"route-controller-manager-6576b87f9c-bm7df\" (UID: \"cd17fb22-d612-4949-8e94-f0aa870439d9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.758105 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96e8a661-1f08-489b-afcb-18f86bf6d4e3-serving-cert\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760114 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2ps4\" (UniqueName: \"kubernetes.io/projected/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-kube-api-access-w2ps4\") pod \"controller-manager-879f6c89f-dgxjm\" (UID: \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760132 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fb99\" (UniqueName: \"kubernetes.io/projected/fcc80584-0b81-45b0-a790-539bfc78c894-kube-api-access-2fb99\") pod \"etcd-operator-b45778765-vh62x\" (UID: \"fcc80584-0b81-45b0-a790-539bfc78c894\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760177 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrnnv\" (UniqueName: \"kubernetes.io/projected/f1d294f9-a755-49bc-bc10-5b4e9739a914-kube-api-access-zrnnv\") pod \"downloads-7954f5f757-vx9gs\" (UID: \"f1d294f9-a755-49bc-bc10-5b4e9739a914\") " pod="openshift-console/downloads-7954f5f757-vx9gs" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760193 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34125ddb-6d12-42f3-9759-ba14a484f117-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xv5r2\" (UID: \"34125ddb-6d12-42f3-9759-ba14a484f117\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xv5r2" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760210 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e49b04-3d85-4323-931e-d0d341d52650-config\") pod \"console-operator-58897d9998-6mn2d\" (UID: \"49e49b04-3d85-4323-931e-d0d341d52650\") " pod="openshift-console-operator/console-operator-58897d9998-6mn2d" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760224 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ab544c1b-884d-47a9-9e75-b133b58ca4db-etcd-serving-ca\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760249 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jklh5\" (UniqueName: \"kubernetes.io/projected/8e5dcd19-170b-4d3a-b1f2-995f97fdad41-kube-api-access-jklh5\") pod \"openshift-config-operator-7777fb866f-dgtcf\" (UID: \"8e5dcd19-170b-4d3a-b1f2-995f97fdad41\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dgtcf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760265 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55zvk\" (UniqueName: \"kubernetes.io/projected/be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad-kube-api-access-55zvk\") pod \"machine-approver-56656f9798-sc7mj\" (UID: \"be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sc7mj" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760284 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70fb1714-50f4-4504-8912-1b0ed4fb508e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pgftz\" (UID: \"70fb1714-50f4-4504-8912-1b0ed4fb508e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgftz" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760302 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab544c1b-884d-47a9-9e75-b133b58ca4db-config\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760316 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab544c1b-884d-47a9-9e75-b133b58ca4db-trusted-ca-bundle\") pod 
\"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760332 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wkzt\" (UniqueName: \"kubernetes.io/projected/70fb1714-50f4-4504-8912-1b0ed4fb508e-kube-api-access-2wkzt\") pod \"openshift-controller-manager-operator-756b6f6bc6-pgftz\" (UID: \"70fb1714-50f4-4504-8912-1b0ed4fb508e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgftz" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760348 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-console-config\") pod \"console-f9d7485db-b9gld\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760365 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/96e8a661-1f08-489b-afcb-18f86bf6d4e3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760393 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62309e3d-7bdc-4573-8a0d-5b485f618ffe-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hgpcv\" (UID: \"62309e3d-7bdc-4573-8a0d-5b485f618ffe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hgpcv" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760408 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqrlb\" (UniqueName: \"kubernetes.io/projected/ab544c1b-884d-47a9-9e75-b133b58ca4db-kube-api-access-gqrlb\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760437 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/96e8a661-1f08-489b-afcb-18f86bf6d4e3-audit-dir\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760453 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62309e3d-7bdc-4573-8a0d-5b485f618ffe-serving-cert\") pod \"authentication-operator-69f744f599-hgpcv\" (UID: \"62309e3d-7bdc-4573-8a0d-5b485f618ffe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hgpcv" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760469 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjhhr\" (UniqueName: \"kubernetes.io/projected/62309e3d-7bdc-4573-8a0d-5b485f618ffe-kube-api-access-jjhhr\") pod \"authentication-operator-69f744f599-hgpcv\" (UID: \"62309e3d-7bdc-4573-8a0d-5b485f618ffe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hgpcv" Jan 06 14:02:06 crc 
kubenswrapper[4869]: I0106 14:02:06.760485 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fcc80584-0b81-45b0-a790-539bfc78c894-etcd-service-ca\") pod \"etcd-operator-b45778765-vh62x\" (UID: \"fcc80584-0b81-45b0-a790-539bfc78c894\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760506 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7rwm\" (UniqueName: \"kubernetes.io/projected/49e49b04-3d85-4323-931e-d0d341d52650-kube-api-access-q7rwm\") pod \"console-operator-58897d9998-6mn2d\" (UID: \"49e49b04-3d85-4323-931e-d0d341d52650\") " pod="openshift-console-operator/console-operator-58897d9998-6mn2d" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760524 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49e49b04-3d85-4323-931e-d0d341d52650-trusted-ca\") pod \"console-operator-58897d9998-6mn2d\" (UID: \"49e49b04-3d85-4323-931e-d0d341d52650\") " pod="openshift-console-operator/console-operator-58897d9998-6mn2d" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760538 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e5dcd19-170b-4d3a-b1f2-995f97fdad41-serving-cert\") pod \"openshift-config-operator-7777fb866f-dgtcf\" (UID: \"8e5dcd19-170b-4d3a-b1f2-995f97fdad41\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dgtcf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760554 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad-machine-approver-tls\") pod \"machine-approver-56656f9798-sc7mj\" (UID: \"be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sc7mj" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760571 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-config\") pod \"controller-manager-879f6c89f-dgxjm\" (UID: \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760587 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad-auth-proxy-config\") pod \"machine-approver-56656f9798-sc7mj\" (UID: \"be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sc7mj" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760602 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/959dc13f-609b-4272-abe4-e26a0f79ab8c-console-serving-cert\") pod \"console-f9d7485db-b9gld\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760635 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dgxjm\" (UID: \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760656 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcc80584-0b81-45b0-a790-539bfc78c894-serving-cert\") pod \"etcd-operator-b45778765-vh62x\" (UID: \"fcc80584-0b81-45b0-a790-539bfc78c894\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760691 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab544c1b-884d-47a9-9e75-b133b58ca4db-serving-cert\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760706 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc80584-0b81-45b0-a790-539bfc78c894-config\") pod \"etcd-operator-b45778765-vh62x\" (UID: \"fcc80584-0b81-45b0-a790-539bfc78c894\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760725 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ab544c1b-884d-47a9-9e75-b133b58ca4db-etcd-client\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760743 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7462c7be-1f9d-4f4b-a844-71a3518a27e2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8t96r\" (UID: \"7462c7be-1f9d-4f4b-a844-71a3518a27e2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8t96r" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760759 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/96e8a661-1f08-489b-afcb-18f86bf6d4e3-audit-policies\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.760773 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-client-ca\") pod \"controller-manager-879f6c89f-dgxjm\" (UID: \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761346 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ab544c1b-884d-47a9-9e75-b133b58ca4db-node-pullsecrets\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761370 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/96e8a661-1f08-489b-afcb-18f86bf6d4e3-encryption-config\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761390 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dtb6\" (UniqueName: \"kubernetes.io/projected/959dc13f-609b-4272-abe4-e26a0f79ab8c-kube-api-access-5dtb6\") pod \"console-f9d7485db-b9gld\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761407 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34125ddb-6d12-42f3-9759-ba14a484f117-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xv5r2\" (UID: \"34125ddb-6d12-42f3-9759-ba14a484f117\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xv5r2" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761426 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62309e3d-7bdc-4573-8a0d-5b485f618ffe-service-ca-bundle\") pod \"authentication-operator-69f744f599-hgpcv\" (UID: \"62309e3d-7bdc-4573-8a0d-5b485f618ffe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hgpcv" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761447 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7462c7be-1f9d-4f4b-a844-71a3518a27e2-config\") pod \"machine-api-operator-5694c8668f-8t96r\" (UID: \"7462c7be-1f9d-4f4b-a844-71a3518a27e2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8t96r" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761467 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f66ca0c-0cf9-40d8-9ed3-e55a3ce6a399-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7tlrk\" (UID: \"0f66ca0c-0cf9-40d8-9ed3-e55a3ce6a399\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7tlrk" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761493 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-trusted-ca-bundle\") pod \"console-f9d7485db-b9gld\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761512 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ab544c1b-884d-47a9-9e75-b133b58ca4db-image-import-ca\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761528 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/959dc13f-609b-4272-abe4-e26a0f79ab8c-console-oauth-config\") pod \"console-f9d7485db-b9gld\" (UID: 
\"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761553 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70fb1714-50f4-4504-8912-1b0ed4fb508e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pgftz\" (UID: \"70fb1714-50f4-4504-8912-1b0ed4fb508e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgftz" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761572 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-oauth-serving-cert\") pod \"console-f9d7485db-b9gld\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761596 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8e5dcd19-170b-4d3a-b1f2-995f97fdad41-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dgtcf\" (UID: \"8e5dcd19-170b-4d3a-b1f2-995f97fdad41\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dgtcf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761621 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j6p7\" (UniqueName: \"kubernetes.io/projected/34125ddb-6d12-42f3-9759-ba14a484f117-kube-api-access-2j6p7\") pod \"cluster-image-registry-operator-dc59b4c8b-xv5r2\" (UID: \"34125ddb-6d12-42f3-9759-ba14a484f117\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xv5r2" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761653 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f26c6409-5ba8-4b46-bb01-9a038091cdfd-metrics-tls\") pod \"dns-operator-744455d44c-d9zlg\" (UID: \"f26c6409-5ba8-4b46-bb01-9a038091cdfd\") " pod="openshift-dns-operator/dns-operator-744455d44c-d9zlg" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761692 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad-config\") pod \"machine-approver-56656f9798-sc7mj\" (UID: \"be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sc7mj" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761718 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrlnp\" (UniqueName: \"kubernetes.io/projected/f26c6409-5ba8-4b46-bb01-9a038091cdfd-kube-api-access-zrlnp\") pod \"dns-operator-744455d44c-d9zlg\" (UID: \"f26c6409-5ba8-4b46-bb01-9a038091cdfd\") " pod="openshift-dns-operator/dns-operator-744455d44c-d9zlg" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761741 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd17fb22-d612-4949-8e94-f0aa870439d9-config\") pod \"route-controller-manager-6576b87f9c-bm7df\" (UID: \"cd17fb22-d612-4949-8e94-f0aa870439d9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 
14:02:06.761763 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/34125ddb-6d12-42f3-9759-ba14a484f117-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xv5r2\" (UID: \"34125ddb-6d12-42f3-9759-ba14a484f117\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xv5r2" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761783 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62309e3d-7bdc-4573-8a0d-5b485f618ffe-config\") pod \"authentication-operator-69f744f599-hgpcv\" (UID: \"62309e3d-7bdc-4573-8a0d-5b485f618ffe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hgpcv" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761799 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fcc80584-0b81-45b0-a790-539bfc78c894-etcd-ca\") pod \"etcd-operator-b45778765-vh62x\" (UID: \"fcc80584-0b81-45b0-a790-539bfc78c894\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761814 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd17fb22-d612-4949-8e94-f0aa870439d9-client-ca\") pod \"route-controller-manager-6576b87f9c-bm7df\" (UID: \"cd17fb22-d612-4949-8e94-f0aa870439d9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761833 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49e49b04-3d85-4323-931e-d0d341d52650-serving-cert\") pod \"console-operator-58897d9998-6mn2d\" (UID: \"49e49b04-3d85-4323-931e-d0d341d52650\") " pod="openshift-console-operator/console-operator-58897d9998-6mn2d" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761875 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab544c1b-884d-47a9-9e75-b133b58ca4db-audit-dir\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761892 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96e8a661-1f08-489b-afcb-18f86bf6d4e3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761910 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lz2w\" (UniqueName: \"kubernetes.io/projected/0f66ca0c-0cf9-40d8-9ed3-e55a3ce6a399-kube-api-access-2lz2w\") pod \"cluster-samples-operator-665b6dd947-7tlrk\" (UID: \"0f66ca0c-0cf9-40d8-9ed3-e55a3ce6a399\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7tlrk" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761927 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ab544c1b-884d-47a9-9e75-b133b58ca4db-audit\") 
pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761944 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7462c7be-1f9d-4f4b-a844-71a3518a27e2-images\") pod \"machine-api-operator-5694c8668f-8t96r\" (UID: \"7462c7be-1f9d-4f4b-a844-71a3518a27e2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8t96r" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761965 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-service-ca\") pod \"console-f9d7485db-b9gld\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761986 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fcc80584-0b81-45b0-a790-539bfc78c894-etcd-client\") pod \"etcd-operator-b45778765-vh62x\" (UID: \"fcc80584-0b81-45b0-a790-539bfc78c894\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.762006 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/96e8a661-1f08-489b-afcb-18f86bf6d4e3-etcd-client\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.762025 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxxm6\" (UniqueName: \"kubernetes.io/projected/96e8a661-1f08-489b-afcb-18f86bf6d4e3-kube-api-access-jxxm6\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.762284 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70fb1714-50f4-4504-8912-1b0ed4fb508e-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pgftz\" (UID: \"70fb1714-50f4-4504-8912-1b0ed4fb508e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgftz" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.762787 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab544c1b-884d-47a9-9e75-b133b58ca4db-config\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.763570 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.763766 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/96e8a661-1f08-489b-afcb-18f86bf6d4e3-audit-dir\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 
14:02:06.763902 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.764172 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-console-config\") pod \"console-f9d7485db-b9gld\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.763320 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34125ddb-6d12-42f3-9759-ba14a484f117-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xv5r2\" (UID: \"34125ddb-6d12-42f3-9759-ba14a484f117\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xv5r2" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.764389 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e49b04-3d85-4323-931e-d0d341d52650-config\") pod \"console-operator-58897d9998-6mn2d\" (UID: \"49e49b04-3d85-4323-931e-d0d341d52650\") " pod="openshift-console-operator/console-operator-58897d9998-6mn2d" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.764677 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ab544c1b-884d-47a9-9e75-b133b58ca4db-node-pullsecrets\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.765298 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad-auth-proxy-config\") pod \"machine-approver-56656f9798-sc7mj\" (UID: \"be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sc7mj" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.765327 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62309e3d-7bdc-4573-8a0d-5b485f618ffe-service-ca-bundle\") pod \"authentication-operator-69f744f599-hgpcv\" (UID: \"62309e3d-7bdc-4573-8a0d-5b485f618ffe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hgpcv" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.765642 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/96e8a661-1f08-489b-afcb-18f86bf6d4e3-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.765910 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-vx9gs"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.765982 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96e8a661-1f08-489b-afcb-18f86bf6d4e3-serving-cert\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 
14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.766011 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.766214 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7462c7be-1f9d-4f4b-a844-71a3518a27e2-config\") pod \"machine-api-operator-5694c8668f-8t96r\" (UID: \"7462c7be-1f9d-4f4b-a844-71a3518a27e2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8t96r" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.766305 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab544c1b-884d-47a9-9e75-b133b58ca4db-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.766333 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc80584-0b81-45b0-a790-539bfc78c894-config\") pod \"etcd-operator-b45778765-vh62x\" (UID: \"fcc80584-0b81-45b0-a790-539bfc78c894\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.766546 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/96e8a661-1f08-489b-afcb-18f86bf6d4e3-audit-policies\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.766820 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ab544c1b-884d-47a9-9e75-b133b58ca4db-etcd-serving-ca\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.767024 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49e49b04-3d85-4323-931e-d0d341d52650-trusted-ca\") pod \"console-operator-58897d9998-6mn2d\" (UID: \"49e49b04-3d85-4323-931e-d0d341d52650\") " pod="openshift-console-operator/console-operator-58897d9998-6mn2d" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.767356 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd17fb22-d612-4949-8e94-f0aa870439d9-serving-cert\") pod \"route-controller-manager-6576b87f9c-bm7df\" (UID: \"cd17fb22-d612-4949-8e94-f0aa870439d9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.767536 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-client-ca\") pod \"controller-manager-879f6c89f-dgxjm\" (UID: \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.767612 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-serving-cert\") pod \"controller-manager-879f6c89f-dgxjm\" (UID: \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.767987 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ab544c1b-884d-47a9-9e75-b133b58ca4db-audit\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.768220 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ab544c1b-884d-47a9-9e75-b133b58ca4db-audit-dir\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.768558 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96e8a661-1f08-489b-afcb-18f86bf6d4e3-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.768655 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ab544c1b-884d-47a9-9e75-b133b58ca4db-image-import-ca\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.768690 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7462c7be-1f9d-4f4b-a844-71a3518a27e2-images\") pod \"machine-api-operator-5694c8668f-8t96r\" (UID: \"7462c7be-1f9d-4f4b-a844-71a3518a27e2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8t96r" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.768930 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-trusted-ca-bundle\") pod \"console-f9d7485db-b9gld\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.761720 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62309e3d-7bdc-4573-8a0d-5b485f618ffe-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hgpcv\" (UID: \"62309e3d-7bdc-4573-8a0d-5b485f618ffe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hgpcv" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.769556 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fcc80584-0b81-45b0-a790-539bfc78c894-etcd-service-ca\") pod \"etcd-operator-b45778765-vh62x\" (UID: \"fcc80584-0b81-45b0-a790-539bfc78c894\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.769615 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-service-ca\") pod \"console-f9d7485db-b9gld\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.769695 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8e5dcd19-170b-4d3a-b1f2-995f97fdad41-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dgtcf\" (UID: \"8e5dcd19-170b-4d3a-b1f2-995f97fdad41\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dgtcf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.769898 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-oauth-serving-cert\") pod \"console-f9d7485db-b9gld\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.770245 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62309e3d-7bdc-4573-8a0d-5b485f618ffe-config\") pod \"authentication-operator-69f744f599-hgpcv\" (UID: \"62309e3d-7bdc-4573-8a0d-5b485f618ffe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hgpcv" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.770921 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd17fb22-d612-4949-8e94-f0aa870439d9-client-ca\") pod \"route-controller-manager-6576b87f9c-bm7df\" (UID: \"cd17fb22-d612-4949-8e94-f0aa870439d9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.770958 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fcc80584-0b81-45b0-a790-539bfc78c894-etcd-ca\") pod \"etcd-operator-b45778765-vh62x\" (UID: \"fcc80584-0b81-45b0-a790-539bfc78c894\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.771489 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dgxjm\" (UID: \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.771545 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad-config\") pod \"machine-approver-56656f9798-sc7mj\" (UID: \"be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sc7mj" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.771833 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd17fb22-d612-4949-8e94-f0aa870439d9-config\") pod \"route-controller-manager-6576b87f9c-bm7df\" (UID: \"cd17fb22-d612-4949-8e94-f0aa870439d9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.771881 4869 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6mn2d"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.772912 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vh62x"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.774474 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f66ca0c-0cf9-40d8-9ed3-e55a3ce6a399-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7tlrk\" (UID: \"0f66ca0c-0cf9-40d8-9ed3-e55a3ce6a399\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7tlrk" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.774802 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/959dc13f-609b-4272-abe4-e26a0f79ab8c-console-oauth-config\") pod \"console-f9d7485db-b9gld\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.775121 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8t96r"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.775464 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f26c6409-5ba8-4b46-bb01-9a038091cdfd-metrics-tls\") pod \"dns-operator-744455d44c-d9zlg\" (UID: \"f26c6409-5ba8-4b46-bb01-9a038091cdfd\") " pod="openshift-dns-operator/dns-operator-744455d44c-d9zlg" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.775754 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/96e8a661-1f08-489b-afcb-18f86bf6d4e3-encryption-config\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.775971 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/34125ddb-6d12-42f3-9759-ba14a484f117-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xv5r2\" (UID: \"34125ddb-6d12-42f3-9759-ba14a484f117\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xv5r2" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.776274 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7462c7be-1f9d-4f4b-a844-71a3518a27e2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-8t96r\" (UID: \"7462c7be-1f9d-4f4b-a844-71a3518a27e2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8t96r" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.776390 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49e49b04-3d85-4323-931e-d0d341d52650-serving-cert\") pod \"console-operator-58897d9998-6mn2d\" (UID: \"49e49b04-3d85-4323-931e-d0d341d52650\") " pod="openshift-console-operator/console-operator-58897d9998-6mn2d" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.776564 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fcc80584-0b81-45b0-a790-539bfc78c894-serving-cert\") pod \"etcd-operator-b45778765-vh62x\" (UID: \"fcc80584-0b81-45b0-a790-539bfc78c894\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.776737 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab544c1b-884d-47a9-9e75-b133b58ca4db-serving-cert\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.777128 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ab544c1b-884d-47a9-9e75-b133b58ca4db-etcd-client\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.777215 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62309e3d-7bdc-4573-8a0d-5b485f618ffe-serving-cert\") pod \"authentication-operator-69f744f599-hgpcv\" (UID: \"62309e3d-7bdc-4573-8a0d-5b485f618ffe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hgpcv" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.777291 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fcc80584-0b81-45b0-a790-539bfc78c894-etcd-client\") pod \"etcd-operator-b45778765-vh62x\" (UID: \"fcc80584-0b81-45b0-a790-539bfc78c894\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.777318 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e5dcd19-170b-4d3a-b1f2-995f97fdad41-serving-cert\") pod \"openshift-config-operator-7777fb866f-dgtcf\" (UID: \"8e5dcd19-170b-4d3a-b1f2-995f97fdad41\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dgtcf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.777412 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70fb1714-50f4-4504-8912-1b0ed4fb508e-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pgftz\" (UID: \"70fb1714-50f4-4504-8912-1b0ed4fb508e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgftz" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.778055 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad-machine-approver-tls\") pod \"machine-approver-56656f9798-sc7mj\" (UID: \"be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sc7mj" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.778969 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/959dc13f-609b-4272-abe4-e26a0f79ab8c-console-serving-cert\") pod \"console-f9d7485db-b9gld\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.780870 4869 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dgxjm"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.782190 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.783349 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-b9gld"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.784490 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.785705 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-hdc42"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.786432 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-hdc42" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.786795 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-config\") pod \"controller-manager-879f6c89f-dgxjm\" (UID: \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.786806 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/96e8a661-1f08-489b-afcb-18f86bf6d4e3-etcd-client\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.788056 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ab544c1b-884d-47a9-9e75-b133b58ca4db-encryption-config\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.788381 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-64v7m"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.788800 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-64v7m" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.789012 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.790727 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qxbrk"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.792037 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9nkqd"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.793773 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g6rsl"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.794212 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5jk5b"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.795126 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.796132 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgftz"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.797043 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7tlrk"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.798000 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h6xlw"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.799255 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-d9zlg"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.800429 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qcbs8"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.801232 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hgpcv"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.802358 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.802535 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dgtcf"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.803400 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qr849"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.805270 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xv5r2"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.818256 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f52zz"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.819415 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l65qs"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.821138 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x46jf"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.821561 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.822405 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9zcbm"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.823704 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-86wsv"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.825001 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb4p6"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.826059 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vg6sr"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.827109 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.828174 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-22vrd"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.829231 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qmjgl"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.831575 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-g9bkv"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.832416 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kznks"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.833648 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pnb6r"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.834873 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hdc42"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.834973 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.835943 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-64v7m"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.837758 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ccwrq"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.838633 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pnb6r"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.839958 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-svdhb"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.840473 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.840518 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-svdhb" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.841098 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9kkzq"] Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.861146 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.881267 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.901294 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.921165 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.941036 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 06 14:02:06 crc kubenswrapper[4869]: I0106 14:02:06.961184 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.001102 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.021245 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.041433 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.061376 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.081091 4869 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.102486 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.121005 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.140395 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.162850 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.181712 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.201448 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.220899 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.241054 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.266820 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.280602 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.300795 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.320366 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.340709 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.361024 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.381323 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.401844 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.422276 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.440453 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 06 
14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.461144 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.483091 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.501069 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.521736 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.540994 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.561591 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.601475 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.621393 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.640559 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.661045 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.681531 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.702716 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.721510 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.741239 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.758954 4869 request.go:700] Waited for 1.018754502s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&limit=500&resourceVersion=0 Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.761409 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.782309 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 06 14:02:07 crc 
kubenswrapper[4869]: I0106 14:02:07.801697 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.822502 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.841984 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.861172 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.889240 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.902370 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.934117 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.941256 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.962933 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 06 14:02:07 crc kubenswrapper[4869]: I0106 14:02:07.981222 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.017083 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.029775 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.041225 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.061384 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.081512 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.101598 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.122487 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.141852 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.160887 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"openshift-service-ca.crt" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.181234 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.201878 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.221248 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.262899 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvs65\" (UniqueName: \"kubernetes.io/projected/7462c7be-1f9d-4f4b-a844-71a3518a27e2-kube-api-access-fvs65\") pod \"machine-api-operator-5694c8668f-8t96r\" (UID: \"7462c7be-1f9d-4f4b-a844-71a3518a27e2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-8t96r" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.283686 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z67kw\" (UniqueName: \"kubernetes.io/projected/cd17fb22-d612-4949-8e94-f0aa870439d9-kube-api-access-z67kw\") pod \"route-controller-manager-6576b87f9c-bm7df\" (UID: \"cd17fb22-d612-4949-8e94-f0aa870439d9\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.299299 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jklh5\" (UniqueName: \"kubernetes.io/projected/8e5dcd19-170b-4d3a-b1f2-995f97fdad41-kube-api-access-jklh5\") pod \"openshift-config-operator-7777fb866f-dgtcf\" (UID: \"8e5dcd19-170b-4d3a-b1f2-995f97fdad41\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dgtcf" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.318509 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2ps4\" (UniqueName: \"kubernetes.io/projected/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-kube-api-access-w2ps4\") pod \"controller-manager-879f6c89f-dgxjm\" (UID: \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.334533 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-8t96r" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.347265 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fb99\" (UniqueName: \"kubernetes.io/projected/fcc80584-0b81-45b0-a790-539bfc78c894-kube-api-access-2fb99\") pod \"etcd-operator-b45778765-vh62x\" (UID: \"fcc80584-0b81-45b0-a790-539bfc78c894\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.359041 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrnnv\" (UniqueName: \"kubernetes.io/projected/f1d294f9-a755-49bc-bc10-5b4e9739a914-kube-api-access-zrnnv\") pod \"downloads-7954f5f757-vx9gs\" (UID: \"f1d294f9-a755-49bc-bc10-5b4e9739a914\") " pod="openshift-console/downloads-7954f5f757-vx9gs" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.373735 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-vx9gs" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.376386 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55zvk\" (UniqueName: \"kubernetes.io/projected/be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad-kube-api-access-55zvk\") pod \"machine-approver-56656f9798-sc7mj\" (UID: \"be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sc7mj" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.389375 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.398853 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxxm6\" (UniqueName: \"kubernetes.io/projected/96e8a661-1f08-489b-afcb-18f86bf6d4e3-kube-api-access-jxxm6\") pod \"apiserver-7bbb656c7d-wzhmf\" (UID: \"96e8a661-1f08-489b-afcb-18f86bf6d4e3\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.418353 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wkzt\" (UniqueName: \"kubernetes.io/projected/70fb1714-50f4-4504-8912-1b0ed4fb508e-kube-api-access-2wkzt\") pod \"openshift-controller-manager-operator-756b6f6bc6-pgftz\" (UID: \"70fb1714-50f4-4504-8912-1b0ed4fb508e\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgftz" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.435051 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjhhr\" (UniqueName: \"kubernetes.io/projected/62309e3d-7bdc-4573-8a0d-5b485f618ffe-kube-api-access-jjhhr\") pod \"authentication-operator-69f744f599-hgpcv\" (UID: \"62309e3d-7bdc-4573-8a0d-5b485f618ffe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hgpcv" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.435683 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.464775 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqrlb\" (UniqueName: \"kubernetes.io/projected/ab544c1b-884d-47a9-9e75-b133b58ca4db-kube-api-access-gqrlb\") pod \"apiserver-76f77b778f-qr849\" (UID: \"ab544c1b-884d-47a9-9e75-b133b58ca4db\") " pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.477645 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.480086 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j6p7\" (UniqueName: \"kubernetes.io/projected/34125ddb-6d12-42f3-9759-ba14a484f117-kube-api-access-2j6p7\") pod \"cluster-image-registry-operator-dc59b4c8b-xv5r2\" (UID: \"34125ddb-6d12-42f3-9759-ba14a484f117\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xv5r2" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.495383 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgftz" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.503108 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.504458 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7rwm\" (UniqueName: \"kubernetes.io/projected/49e49b04-3d85-4323-931e-d0d341d52650-kube-api-access-q7rwm\") pod \"console-operator-58897d9998-6mn2d\" (UID: \"49e49b04-3d85-4323-931e-d0d341d52650\") " pod="openshift-console-operator/console-operator-58897d9998-6mn2d" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.512803 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sc7mj" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.526132 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.541640 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dgtcf" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.552965 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hgpcv" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.558647 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dtb6\" (UniqueName: \"kubernetes.io/projected/959dc13f-609b-4272-abe4-e26a0f79ab8c-kube-api-access-5dtb6\") pod \"console-f9d7485db-b9gld\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:08 crc kubenswrapper[4869]: W0106 14:02:08.559565 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe5c9ed4_3ad3_4db5_89f2_0eb5f4e4e4ad.slice/crio-12d327a876abe8e2ba95c157bdd69ab12128144d2f3fa5bc5549cd21e560ab90 WatchSource:0}: Error finding container 12d327a876abe8e2ba95c157bdd69ab12128144d2f3fa5bc5549cd21e560ab90: Status 404 returned error can't find the container with id 12d327a876abe8e2ba95c157bdd69ab12128144d2f3fa5bc5549cd21e560ab90 Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.583072 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrlnp\" (UniqueName: \"kubernetes.io/projected/f26c6409-5ba8-4b46-bb01-9a038091cdfd-kube-api-access-zrlnp\") pod \"dns-operator-744455d44c-d9zlg\" (UID: \"f26c6409-5ba8-4b46-bb01-9a038091cdfd\") " pod="openshift-dns-operator/dns-operator-744455d44c-d9zlg" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.600579 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lz2w\" (UniqueName: \"kubernetes.io/projected/0f66ca0c-0cf9-40d8-9ed3-e55a3ce6a399-kube-api-access-2lz2w\") pod \"cluster-samples-operator-665b6dd947-7tlrk\" (UID: \"0f66ca0c-0cf9-40d8-9ed3-e55a3ce6a399\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7tlrk" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.612994 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.619883 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34125ddb-6d12-42f3-9759-ba14a484f117-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xv5r2\" (UID: \"34125ddb-6d12-42f3-9759-ba14a484f117\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xv5r2" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.622370 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6mn2d" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.624734 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.646388 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.647945 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.661596 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.666610 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-8t96r"] Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.680763 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-vx9gs"] Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.681720 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.702202 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 06 14:02:08 crc kubenswrapper[4869]: W0106 14:02:08.716594 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1d294f9_a755_49bc_bc10_5b4e9739a914.slice/crio-4181f22b90aedb6e1d5c9342a130f0543200f9bd5478c23036adf62410d0437d WatchSource:0}: Error finding container 4181f22b90aedb6e1d5c9342a130f0543200f9bd5478c23036adf62410d0437d: Status 404 returned error can't find the container with id 4181f22b90aedb6e1d5c9342a130f0543200f9bd5478c23036adf62410d0437d Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.717532 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:08 crc kubenswrapper[4869]: W0106 14:02:08.717626 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7462c7be_1f9d_4f4b_a844_71a3518a27e2.slice/crio-5c3c1e4011abea44aa97644cc0134e9719636264c420a2b247f1b9c8259470eb WatchSource:0}: Error finding container 5c3c1e4011abea44aa97644cc0134e9719636264c420a2b247f1b9c8259470eb: Status 404 returned error can't find the container with id 5c3c1e4011abea44aa97644cc0134e9719636264c420a2b247f1b9c8259470eb Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.723192 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.742278 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.748948 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-d9zlg" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.758955 4869 request.go:700] Waited for 1.923606832s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.762158 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.766103 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xv5r2" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.782017 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.805798 4869 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.819292 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf"] Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.823207 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.826532 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vh62x"] Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.841550 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 06 14:02:08 crc kubenswrapper[4869]: W0106 14:02:08.842311 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcc80584_0b81_45b0_a790_539bfc78c894.slice/crio-4fb4438db1e286b50cbe4dbaf21765273f73eefd4c602f1dfd76d505b34f4083 WatchSource:0}: Error finding container 4fb4438db1e286b50cbe4dbaf21765273f73eefd4c602f1dfd76d505b34f4083: Status 404 returned error can't find the container with id 4fb4438db1e286b50cbe4dbaf21765273f73eefd4c602f1dfd76d505b34f4083 Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.861449 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7tlrk" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.862234 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.881923 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgftz"] Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891011 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0d98f64-908a-4500-aec4-8542ebf281d3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ccwrq\" (UID: \"b0d98f64-908a-4500-aec4-8542ebf281d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ccwrq" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891052 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18a99257-541b-4b05-bdef-21c591879b90-config\") pod \"openshift-apiserver-operator-796bbdcf4f-g6rsl\" (UID: \"18a99257-541b-4b05-bdef-21c591879b90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g6rsl" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891092 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15c48694-481d-4ac5-80cc-e153ca5fb1d1-trusted-ca\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891116 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nswc\" (UniqueName: \"kubernetes.io/projected/7a5395fe-a04c-4913-a749-f7316689b418-kube-api-access-2nswc\") pod \"kube-storage-version-migrator-operator-b67b599dd-qcbs8\" (UID: \"7a5395fe-a04c-4913-a749-f7316689b418\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qcbs8" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891178 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/538d7a4a-0270-4948-a67f-69f1d297f371-stats-auth\") pod \"router-default-5444994796-4sgbs\" (UID: \"538d7a4a-0270-4948-a67f-69f1d297f371\") " pod="openshift-ingress/router-default-5444994796-4sgbs" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891238 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/15c48694-481d-4ac5-80cc-e153ca5fb1d1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891268 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0d98f64-908a-4500-aec4-8542ebf281d3-config\") pod 
\"kube-controller-manager-operator-78b949d7b-ccwrq\" (UID: \"b0d98f64-908a-4500-aec4-8542ebf281d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ccwrq" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891345 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/15c48694-481d-4ac5-80cc-e153ca5fb1d1-registry-certificates\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891430 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88xzz\" (UniqueName: \"kubernetes.io/projected/7fbbef50-6a8d-4b24-ab17-b626c7d251d5-kube-api-access-88xzz\") pod \"multus-admission-controller-857f4d67dd-g9bkv\" (UID: \"7fbbef50-6a8d-4b24-ab17-b626c7d251d5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g9bkv" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891491 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/538d7a4a-0270-4948-a67f-69f1d297f371-metrics-certs\") pod \"router-default-5444994796-4sgbs\" (UID: \"538d7a4a-0270-4948-a67f-69f1d297f371\") " pod="openshift-ingress/router-default-5444994796-4sgbs" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891522 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7fbbef50-6a8d-4b24-ab17-b626c7d251d5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-g9bkv\" (UID: \"7fbbef50-6a8d-4b24-ab17-b626c7d251d5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g9bkv" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891559 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18a99257-541b-4b05-bdef-21c591879b90-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-g6rsl\" (UID: \"18a99257-541b-4b05-bdef-21c591879b90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g6rsl" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891576 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/15c48694-481d-4ac5-80cc-e153ca5fb1d1-registry-tls\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891632 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/15c48694-481d-4ac5-80cc-e153ca5fb1d1-bound-sa-token\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891652 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/538d7a4a-0270-4948-a67f-69f1d297f371-default-certificate\") 
pod \"router-default-5444994796-4sgbs\" (UID: \"538d7a4a-0270-4948-a67f-69f1d297f371\") " pod="openshift-ingress/router-default-5444994796-4sgbs" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891688 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/538d7a4a-0270-4948-a67f-69f1d297f371-service-ca-bundle\") pod \"router-default-5444994796-4sgbs\" (UID: \"538d7a4a-0270-4948-a67f-69f1d297f371\") " pod="openshift-ingress/router-default-5444994796-4sgbs" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891727 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/15c48694-481d-4ac5-80cc-e153ca5fb1d1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891755 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5395fe-a04c-4913-a749-f7316689b418-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qcbs8\" (UID: \"7a5395fe-a04c-4913-a749-f7316689b418\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qcbs8" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891817 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l8zb\" (UniqueName: \"kubernetes.io/projected/15c48694-481d-4ac5-80cc-e153ca5fb1d1-kube-api-access-2l8zb\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891856 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a5395fe-a04c-4913-a749-f7316689b418-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qcbs8\" (UID: \"7a5395fe-a04c-4913-a749-f7316689b418\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qcbs8" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891907 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891941 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzljj\" (UniqueName: \"kubernetes.io/projected/18a99257-541b-4b05-bdef-21c591879b90-kube-api-access-xzljj\") pod \"openshift-apiserver-operator-796bbdcf4f-g6rsl\" (UID: \"18a99257-541b-4b05-bdef-21c591879b90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g6rsl" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.891985 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd5f8\" 
(UniqueName: \"kubernetes.io/projected/538d7a4a-0270-4948-a67f-69f1d297f371-kube-api-access-xd5f8\") pod \"router-default-5444994796-4sgbs\" (UID: \"538d7a4a-0270-4948-a67f-69f1d297f371\") " pod="openshift-ingress/router-default-5444994796-4sgbs" Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.892098 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0d98f64-908a-4500-aec4-8542ebf281d3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ccwrq\" (UID: \"b0d98f64-908a-4500-aec4-8542ebf281d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ccwrq" Jan 06 14:02:08 crc kubenswrapper[4869]: E0106 14:02:08.893047 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:09.393026233 +0000 UTC m=+147.932713987 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.914839 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hgpcv"] Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.950156 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dgtcf"] Jan 06 14:02:08 crc kubenswrapper[4869]: I0106 14:02:08.977579 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df"] Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.002363 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.002652 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.002723 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzljj\" (UniqueName: \"kubernetes.io/projected/18a99257-541b-4b05-bdef-21c591879b90-kube-api-access-xzljj\") pod \"openshift-apiserver-operator-796bbdcf4f-g6rsl\" (UID: \"18a99257-541b-4b05-bdef-21c591879b90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g6rsl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.002752 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.002778 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.002814 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e1ad4f-a43f-46c7-8fca-75a84adac372-config\") pod \"service-ca-operator-777779d784-l65qs\" (UID: \"b8e1ad4f-a43f-46c7-8fca-75a84adac372\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l65qs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.002839 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/93c78ab4-fc39-46e0-9135-146854d02c0f-csi-data-dir\") pod \"csi-hostpathplugin-pnb6r\" (UID: \"93c78ab4-fc39-46e0-9135-146854d02c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.002864 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/93c78ab4-fc39-46e0-9135-146854d02c0f-socket-dir\") pod \"csi-hostpathplugin-pnb6r\" (UID: \"93c78ab4-fc39-46e0-9135-146854d02c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.002896 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9fd65c31-6572-4cd8-9d53-3d011e93e1a5-srv-cert\") pod \"catalog-operator-68c6474976-qxbrk\" (UID: \"9fd65c31-6572-4cd8-9d53-3d011e93e1a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qxbrk" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.002922 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7kg9\" (UniqueName: \"kubernetes.io/projected/25bd3d1b-ff4a-4369-af67-dea3889d9db3-kube-api-access-n7kg9\") pod \"migrator-59844c95c7-vg6sr\" (UID: \"25bd3d1b-ff4a-4369-af67-dea3889d9db3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vg6sr" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.002943 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/93c78ab4-fc39-46e0-9135-146854d02c0f-plugins-dir\") pod \"csi-hostpathplugin-pnb6r\" (UID: \"93c78ab4-fc39-46e0-9135-146854d02c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.002966 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e8156833-621a-414d-9aab-83b8bceb2d09-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9kkzq\" (UID: \"e8156833-621a-414d-9aab-83b8bceb2d09\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9kkzq" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.002994 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.003018 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grqhh\" (UniqueName: \"kubernetes.io/projected/2bef3e32-812d-4ced-ab0d-440c1f7c535d-kube-api-access-grqhh\") pod \"service-ca-9c57cc56f-9zcbm\" (UID: \"2bef3e32-812d-4ced-ab0d-440c1f7c535d\") " pod="openshift-service-ca/service-ca-9c57cc56f-9zcbm" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.003043 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nswc\" (UniqueName: \"kubernetes.io/projected/7a5395fe-a04c-4913-a749-f7316689b418-kube-api-access-2nswc\") pod \"kube-storage-version-migrator-operator-b67b599dd-qcbs8\" (UID: \"7a5395fe-a04c-4913-a749-f7316689b418\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qcbs8" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.003066 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9fd65c31-6572-4cd8-9d53-3d011e93e1a5-profile-collector-cert\") pod \"catalog-operator-68c6474976-qxbrk\" (UID: \"9fd65c31-6572-4cd8-9d53-3d011e93e1a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qxbrk" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.003104 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb6g6\" (UniqueName: \"kubernetes.io/projected/e8156833-621a-414d-9aab-83b8bceb2d09-kube-api-access-qb6g6\") pod \"package-server-manager-789f6589d5-9kkzq\" (UID: \"e8156833-621a-414d-9aab-83b8bceb2d09\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9kkzq" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.003129 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0d98f64-908a-4500-aec4-8542ebf281d3-config\") pod \"kube-controller-manager-operator-78b949d7b-ccwrq\" (UID: \"b0d98f64-908a-4500-aec4-8542ebf281d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ccwrq" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.003149 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6aace82b-ec31-40e1-808f-f06962fb0bd4-certs\") pod \"machine-config-server-svdhb\" (UID: \"6aace82b-ec31-40e1-808f-f06962fb0bd4\") " pod="openshift-machine-config-operator/machine-config-server-svdhb" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 
14:02:09.003170 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.003190 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e0f471c5-8336-42d0-84ff-6e85011cea0a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-h6xlw\" (UID: \"e0f471c5-8336-42d0-84ff-6e85011cea0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.003212 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qckb\" (UniqueName: \"kubernetes.io/projected/6bd88edc-2d9d-4456-8cbd-812d024b4ed6-kube-api-access-2qckb\") pod \"packageserver-d55dfcdfc-vs269\" (UID: \"6bd88edc-2d9d-4456-8cbd-812d024b4ed6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.003236 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bfeea9a2-3239-4a04-a07e-7c0e0dd28bd2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f52zz\" (UID: \"bfeea9a2-3239-4a04-a07e-7c0e0dd28bd2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f52zz" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.003260 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6bd88edc-2d9d-4456-8cbd-812d024b4ed6-apiservice-cert\") pod \"packageserver-d55dfcdfc-vs269\" (UID: \"6bd88edc-2d9d-4456-8cbd-812d024b4ed6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.003286 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a76d0f8-e02b-494e-849d-31a85ff80297-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-86wsv\" (UID: \"0a76d0f8-e02b-494e-849d-31a85ff80297\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-86wsv" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.003308 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/15c48694-481d-4ac5-80cc-e153ca5fb1d1-registry-certificates\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.003330 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f8dd0e44-71e5-4c75-bce5-4d4cc652cc18-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pb4p6\" (UID: \"f8dd0e44-71e5-4c75-bce5-4d4cc652cc18\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb4p6" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.003351 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8e1ad4f-a43f-46c7-8fca-75a84adac372-serving-cert\") pod \"service-ca-operator-777779d784-l65qs\" (UID: \"b8e1ad4f-a43f-46c7-8fca-75a84adac372\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l65qs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.003372 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/58ee4883-a1a6-425c-b079-059119125791-audit-dir\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.003607 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88xzz\" (UniqueName: \"kubernetes.io/projected/7fbbef50-6a8d-4b24-ab17-b626c7d251d5-kube-api-access-88xzz\") pod \"multus-admission-controller-857f4d67dd-g9bkv\" (UID: \"7fbbef50-6a8d-4b24-ab17-b626c7d251d5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g9bkv" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.003694 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzmkd\" (UniqueName: \"kubernetes.io/projected/bfeea9a2-3239-4a04-a07e-7c0e0dd28bd2-kube-api-access-bzmkd\") pod \"control-plane-machine-set-operator-78cbb6b69f-f52zz\" (UID: \"bfeea9a2-3239-4a04-a07e-7c0e0dd28bd2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f52zz" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.003723 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e2e3542-c34e-4dfb-b17f-7ed4b8b9a1f4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-22vrd\" (UID: \"7e2e3542-c34e-4dfb-b17f-7ed4b8b9a1f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-22vrd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.003774 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95ac97f9-f168-4470-a3dd-4097a7a4abc9-cert\") pod \"ingress-canary-64v7m\" (UID: \"95ac97f9-f168-4470-a3dd-4097a7a4abc9\") " pod="openshift-ingress-canary/ingress-canary-64v7m" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.003801 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9c265449-14f8-4b89-b50c-7889b5d41c64-metrics-tls\") pod \"dns-default-hdc42\" (UID: \"9c265449-14f8-4b89-b50c-7889b5d41c64\") " pod="openshift-dns/dns-default-hdc42" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.003842 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/15c48694-481d-4ac5-80cc-e153ca5fb1d1-bound-sa-token\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:09 crc kubenswrapper[4869]: E0106 14:02:09.004734 4869 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:09.504713005 +0000 UTC m=+148.044400679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.004906 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/538d7a4a-0270-4948-a67f-69f1d297f371-service-ca-bundle\") pod \"router-default-5444994796-4sgbs\" (UID: \"538d7a4a-0270-4948-a67f-69f1d297f371\") " pod="openshift-ingress/router-default-5444994796-4sgbs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.004939 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6aace82b-ec31-40e1-808f-f06962fb0bd4-node-bootstrap-token\") pod \"machine-config-server-svdhb\" (UID: \"6aace82b-ec31-40e1-808f-f06962fb0bd4\") " pod="openshift-machine-config-operator/machine-config-server-svdhb" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.006253 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/538d7a4a-0270-4948-a67f-69f1d297f371-service-ca-bundle\") pod \"router-default-5444994796-4sgbs\" (UID: \"538d7a4a-0270-4948-a67f-69f1d297f371\") " pod="openshift-ingress/router-default-5444994796-4sgbs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.006334 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.006360 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj5bv\" (UniqueName: \"kubernetes.io/projected/c9b8f39b-2b28-41a6-a477-0efe9e1637b8-kube-api-access-hj5bv\") pod \"machine-config-operator-74547568cd-9nkqd\" (UID: \"c9b8f39b-2b28-41a6-a477-0efe9e1637b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nkqd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.006393 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.006430 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.006454 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.006471 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhq7q\" (UniqueName: \"kubernetes.io/projected/0a76d0f8-e02b-494e-849d-31a85ff80297-kube-api-access-dhq7q\") pod \"machine-config-controller-84d6567774-86wsv\" (UID: \"0a76d0f8-e02b-494e-849d-31a85ff80297\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-86wsv" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.006516 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6a89a2d-4f24-4e29-8c2d-60dfa652a641-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x46jf\" (UID: \"f6a89a2d-4f24-4e29-8c2d-60dfa652a641\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x46jf" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.006552 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a5395fe-a04c-4913-a749-f7316689b418-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qcbs8\" (UID: \"7a5395fe-a04c-4913-a749-f7316689b418\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qcbs8" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.006570 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c265449-14f8-4b89-b50c-7889b5d41c64-config-volume\") pod \"dns-default-hdc42\" (UID: \"9c265449-14f8-4b89-b50c-7889b5d41c64\") " pod="openshift-dns/dns-default-hdc42" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.006593 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0d98f64-908a-4500-aec4-8542ebf281d3-config\") pod \"kube-controller-manager-operator-78b949d7b-ccwrq\" (UID: \"b0d98f64-908a-4500-aec4-8542ebf281d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ccwrq" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.006743 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/41b0b15a-6333-48e5-8111-90e0dbe246c3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kznks\" (UID: \"41b0b15a-6333-48e5-8111-90e0dbe246c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kznks" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 
14:02:09.006789 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjpd5\" (UniqueName: \"kubernetes.io/projected/58ee4883-a1a6-425c-b079-059119125791-kube-api-access-fjpd5\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.006798 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/15c48694-481d-4ac5-80cc-e153ca5fb1d1-registry-certificates\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.007150 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnhvn\" (UniqueName: \"kubernetes.io/projected/41b0b15a-6333-48e5-8111-90e0dbe246c3-kube-api-access-rnhvn\") pod \"ingress-operator-5b745b69d9-kznks\" (UID: \"41b0b15a-6333-48e5-8111-90e0dbe246c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kznks" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.007200 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd5f8\" (UniqueName: \"kubernetes.io/projected/538d7a4a-0270-4948-a67f-69f1d297f371-kube-api-access-xd5f8\") pod \"router-default-5444994796-4sgbs\" (UID: \"538d7a4a-0270-4948-a67f-69f1d297f371\") " pod="openshift-ingress/router-default-5444994796-4sgbs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.007295 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6a89a2d-4f24-4e29-8c2d-60dfa652a641-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x46jf\" (UID: \"f6a89a2d-4f24-4e29-8c2d-60dfa652a641\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x46jf" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.007366 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8n77\" (UniqueName: \"kubernetes.io/projected/b8e1ad4f-a43f-46c7-8fca-75a84adac372-kube-api-access-m8n77\") pod \"service-ca-operator-777779d784-l65qs\" (UID: \"b8e1ad4f-a43f-46c7-8fca-75a84adac372\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l65qs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.007443 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0d98f64-908a-4500-aec4-8542ebf281d3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ccwrq\" (UID: \"b0d98f64-908a-4500-aec4-8542ebf281d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ccwrq" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.007589 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-audit-policies\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.007822 
4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxq59\" (UniqueName: \"kubernetes.io/projected/2f52f78b-eb13-45bc-bf05-d1c138781664-kube-api-access-rxq59\") pod \"collect-profiles-29461800-4xp92\" (UID: \"2f52f78b-eb13-45bc-bf05-d1c138781664\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.007865 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0d98f64-908a-4500-aec4-8542ebf281d3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ccwrq\" (UID: \"b0d98f64-908a-4500-aec4-8542ebf281d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ccwrq" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.007890 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e2e3542-c34e-4dfb-b17f-7ed4b8b9a1f4-config\") pod \"kube-apiserver-operator-766d6c64bb-22vrd\" (UID: \"7e2e3542-c34e-4dfb-b17f-7ed4b8b9a1f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-22vrd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.007913 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e2e3542-c34e-4dfb-b17f-7ed4b8b9a1f4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-22vrd\" (UID: \"7e2e3542-c34e-4dfb-b17f-7ed4b8b9a1f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-22vrd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.007934 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f52f78b-eb13-45bc-bf05-d1c138781664-secret-volume\") pod \"collect-profiles-29461800-4xp92\" (UID: \"2f52f78b-eb13-45bc-bf05-d1c138781664\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.007956 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.007981 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0f471c5-8336-42d0-84ff-6e85011cea0a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-h6xlw\" (UID: \"e0f471c5-8336-42d0-84ff-6e85011cea0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.008030 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15c48694-481d-4ac5-80cc-e153ca5fb1d1-trusted-ca\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 
14:02:09.008054 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18a99257-541b-4b05-bdef-21c591879b90-config\") pod \"openshift-apiserver-operator-796bbdcf4f-g6rsl\" (UID: \"18a99257-541b-4b05-bdef-21c591879b90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g6rsl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.008090 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/538d7a4a-0270-4948-a67f-69f1d297f371-stats-auth\") pod \"router-default-5444994796-4sgbs\" (UID: \"538d7a4a-0270-4948-a67f-69f1d297f371\") " pod="openshift-ingress/router-default-5444994796-4sgbs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.008116 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/93c78ab4-fc39-46e0-9135-146854d02c0f-registration-dir\") pod \"csi-hostpathplugin-pnb6r\" (UID: \"93c78ab4-fc39-46e0-9135-146854d02c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.008145 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/15c48694-481d-4ac5-80cc-e153ca5fb1d1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.008169 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41b0b15a-6333-48e5-8111-90e0dbe246c3-metrics-tls\") pod \"ingress-operator-5b745b69d9-kznks\" (UID: \"41b0b15a-6333-48e5-8111-90e0dbe246c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kznks" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.008199 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bsdv\" (UniqueName: \"kubernetes.io/projected/f8dd0e44-71e5-4c75-bce5-4d4cc652cc18-kube-api-access-4bsdv\") pod \"olm-operator-6b444d44fb-pb4p6\" (UID: \"f8dd0e44-71e5-4c75-bce5-4d4cc652cc18\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb4p6" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.008224 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f52f78b-eb13-45bc-bf05-d1c138781664-config-volume\") pod \"collect-profiles-29461800-4xp92\" (UID: \"2f52f78b-eb13-45bc-bf05-d1c138781664\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.008261 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c9b8f39b-2b28-41a6-a477-0efe9e1637b8-proxy-tls\") pod \"machine-config-operator-74547568cd-9nkqd\" (UID: \"c9b8f39b-2b28-41a6-a477-0efe9e1637b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nkqd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.008297 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c9b8f39b-2b28-41a6-a477-0efe9e1637b8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9nkqd\" (UID: \"c9b8f39b-2b28-41a6-a477-0efe9e1637b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nkqd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.008324 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41b0b15a-6333-48e5-8111-90e0dbe246c3-trusted-ca\") pod \"ingress-operator-5b745b69d9-kznks\" (UID: \"41b0b15a-6333-48e5-8111-90e0dbe246c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kznks" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.008347 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjbqs\" (UniqueName: \"kubernetes.io/projected/e0f471c5-8336-42d0-84ff-6e85011cea0a-kube-api-access-pjbqs\") pod \"marketplace-operator-79b997595-h6xlw\" (UID: \"e0f471c5-8336-42d0-84ff-6e85011cea0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.008551 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6bd88edc-2d9d-4456-8cbd-812d024b4ed6-webhook-cert\") pod \"packageserver-d55dfcdfc-vs269\" (UID: \"6bd88edc-2d9d-4456-8cbd-812d024b4ed6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.009252 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6bd88edc-2d9d-4456-8cbd-812d024b4ed6-tmpfs\") pod \"packageserver-d55dfcdfc-vs269\" (UID: \"6bd88edc-2d9d-4456-8cbd-812d024b4ed6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.009306 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a76d0f8-e02b-494e-849d-31a85ff80297-proxy-tls\") pod \"machine-config-controller-84d6567774-86wsv\" (UID: \"0a76d0f8-e02b-494e-849d-31a85ff80297\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-86wsv" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.009361 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tt5x\" (UniqueName: \"kubernetes.io/projected/6aace82b-ec31-40e1-808f-f06962fb0bd4-kube-api-access-4tt5x\") pod \"machine-config-server-svdhb\" (UID: \"6aace82b-ec31-40e1-808f-f06962fb0bd4\") " pod="openshift-machine-config-operator/machine-config-server-svdhb" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.009388 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c9b8f39b-2b28-41a6-a477-0efe9e1637b8-images\") pod \"machine-config-operator-74547568cd-9nkqd\" (UID: \"c9b8f39b-2b28-41a6-a477-0efe9e1637b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nkqd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.009476 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.009508 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2bef3e32-812d-4ced-ab0d-440c1f7c535d-signing-cabundle\") pod \"service-ca-9c57cc56f-9zcbm\" (UID: \"2bef3e32-812d-4ced-ab0d-440c1f7c535d\") " pod="openshift-service-ca/service-ca-9c57cc56f-9zcbm" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.009904 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f8dd0e44-71e5-4c75-bce5-4d4cc652cc18-srv-cert\") pod \"olm-operator-6b444d44fb-pb4p6\" (UID: \"f8dd0e44-71e5-4c75-bce5-4d4cc652cc18\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb4p6" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.009974 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/538d7a4a-0270-4948-a67f-69f1d297f371-metrics-certs\") pod \"router-default-5444994796-4sgbs\" (UID: \"538d7a4a-0270-4948-a67f-69f1d297f371\") " pod="openshift-ingress/router-default-5444994796-4sgbs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.010017 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7fbbef50-6a8d-4b24-ab17-b626c7d251d5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-g9bkv\" (UID: \"7fbbef50-6a8d-4b24-ab17-b626c7d251d5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g9bkv" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.011396 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxr64\" (UniqueName: \"kubernetes.io/projected/9fd65c31-6572-4cd8-9d53-3d011e93e1a5-kube-api-access-sxr64\") pod \"catalog-operator-68c6474976-qxbrk\" (UID: \"9fd65c31-6572-4cd8-9d53-3d011e93e1a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qxbrk" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.011558 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18a99257-541b-4b05-bdef-21c591879b90-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-g6rsl\" (UID: \"18a99257-541b-4b05-bdef-21c591879b90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g6rsl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.011716 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/15c48694-481d-4ac5-80cc-e153ca5fb1d1-registry-tls\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.012968 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/538d7a4a-0270-4948-a67f-69f1d297f371-default-certificate\") pod \"router-default-5444994796-4sgbs\" (UID: 
\"538d7a4a-0270-4948-a67f-69f1d297f371\") " pod="openshift-ingress/router-default-5444994796-4sgbs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.013023 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/93c78ab4-fc39-46e0-9135-146854d02c0f-mountpoint-dir\") pod \"csi-hostpathplugin-pnb6r\" (UID: \"93c78ab4-fc39-46e0-9135-146854d02c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.013317 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6a89a2d-4f24-4e29-8c2d-60dfa652a641-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x46jf\" (UID: \"f6a89a2d-4f24-4e29-8c2d-60dfa652a641\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x46jf" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.013392 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/15c48694-481d-4ac5-80cc-e153ca5fb1d1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.013434 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6skr\" (UniqueName: \"kubernetes.io/projected/95ac97f9-f168-4470-a3dd-4097a7a4abc9-kube-api-access-c6skr\") pod \"ingress-canary-64v7m\" (UID: \"95ac97f9-f168-4470-a3dd-4097a7a4abc9\") " pod="openshift-ingress-canary/ingress-canary-64v7m" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.013567 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b9cg\" (UniqueName: \"kubernetes.io/projected/9c265449-14f8-4b89-b50c-7889b5d41c64-kube-api-access-7b9cg\") pod \"dns-default-hdc42\" (UID: \"9c265449-14f8-4b89-b50c-7889b5d41c64\") " pod="openshift-dns/dns-default-hdc42" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.013632 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5395fe-a04c-4913-a749-f7316689b418-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qcbs8\" (UID: \"7a5395fe-a04c-4913-a749-f7316689b418\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qcbs8" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.013694 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2bef3e32-812d-4ced-ab0d-440c1f7c535d-signing-key\") pod \"service-ca-9c57cc56f-9zcbm\" (UID: \"2bef3e32-812d-4ced-ab0d-440c1f7c535d\") " pod="openshift-service-ca/service-ca-9c57cc56f-9zcbm" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.013724 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/15c48694-481d-4ac5-80cc-e153ca5fb1d1-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:09 crc kubenswrapper[4869]: 
I0106 14:02:09.013764 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l8zb\" (UniqueName: \"kubernetes.io/projected/15c48694-481d-4ac5-80cc-e153ca5fb1d1-kube-api-access-2l8zb\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.013805 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbm2k\" (UniqueName: \"kubernetes.io/projected/93c78ab4-fc39-46e0-9135-146854d02c0f-kube-api-access-gbm2k\") pod \"csi-hostpathplugin-pnb6r\" (UID: \"93c78ab4-fc39-46e0-9135-146854d02c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.014761 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5395fe-a04c-4913-a749-f7316689b418-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qcbs8\" (UID: \"7a5395fe-a04c-4913-a749-f7316689b418\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qcbs8" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.014950 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18a99257-541b-4b05-bdef-21c591879b90-config\") pod \"openshift-apiserver-operator-796bbdcf4f-g6rsl\" (UID: \"18a99257-541b-4b05-bdef-21c591879b90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g6rsl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.019081 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15c48694-481d-4ac5-80cc-e153ca5fb1d1-trusted-ca\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.020260 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/538d7a4a-0270-4948-a67f-69f1d297f371-default-certificate\") pod \"router-default-5444994796-4sgbs\" (UID: \"538d7a4a-0270-4948-a67f-69f1d297f371\") " pod="openshift-ingress/router-default-5444994796-4sgbs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.024062 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a5395fe-a04c-4913-a749-f7316689b418-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qcbs8\" (UID: \"7a5395fe-a04c-4913-a749-f7316689b418\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qcbs8" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.024988 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0d98f64-908a-4500-aec4-8542ebf281d3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ccwrq\" (UID: \"b0d98f64-908a-4500-aec4-8542ebf281d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ccwrq" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.024979 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/538d7a4a-0270-4948-a67f-69f1d297f371-metrics-certs\") pod \"router-default-5444994796-4sgbs\" (UID: \"538d7a4a-0270-4948-a67f-69f1d297f371\") " pod="openshift-ingress/router-default-5444994796-4sgbs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.026422 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/538d7a4a-0270-4948-a67f-69f1d297f371-stats-auth\") pod \"router-default-5444994796-4sgbs\" (UID: \"538d7a4a-0270-4948-a67f-69f1d297f371\") " pod="openshift-ingress/router-default-5444994796-4sgbs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.028857 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/15c48694-481d-4ac5-80cc-e153ca5fb1d1-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.028935 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18a99257-541b-4b05-bdef-21c591879b90-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-g6rsl\" (UID: \"18a99257-541b-4b05-bdef-21c591879b90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g6rsl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.028944 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7fbbef50-6a8d-4b24-ab17-b626c7d251d5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-g9bkv\" (UID: \"7fbbef50-6a8d-4b24-ab17-b626c7d251d5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g9bkv" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.029458 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/15c48694-481d-4ac5-80cc-e153ca5fb1d1-registry-tls\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.039408 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzljj\" (UniqueName: \"kubernetes.io/projected/18a99257-541b-4b05-bdef-21c591879b90-kube-api-access-xzljj\") pod \"openshift-apiserver-operator-796bbdcf4f-g6rsl\" (UID: \"18a99257-541b-4b05-bdef-21c591879b90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g6rsl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.057642 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nswc\" (UniqueName: \"kubernetes.io/projected/7a5395fe-a04c-4913-a749-f7316689b418-kube-api-access-2nswc\") pod \"kube-storage-version-migrator-operator-b67b599dd-qcbs8\" (UID: \"7a5395fe-a04c-4913-a749-f7316689b418\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qcbs8" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.078932 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qr849"] Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.089413 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/15c48694-481d-4ac5-80cc-e153ca5fb1d1-bound-sa-token\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.108339 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88xzz\" (UniqueName: \"kubernetes.io/projected/7fbbef50-6a8d-4b24-ab17-b626c7d251d5-kube-api-access-88xzz\") pod \"multus-admission-controller-857f4d67dd-g9bkv\" (UID: \"7fbbef50-6a8d-4b24-ab17-b626c7d251d5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g9bkv" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.115759 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6bd88edc-2d9d-4456-8cbd-812d024b4ed6-apiservice-cert\") pod \"packageserver-d55dfcdfc-vs269\" (UID: \"6bd88edc-2d9d-4456-8cbd-812d024b4ed6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.115794 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a76d0f8-e02b-494e-849d-31a85ff80297-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-86wsv\" (UID: \"0a76d0f8-e02b-494e-849d-31a85ff80297\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-86wsv" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.115814 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f8dd0e44-71e5-4c75-bce5-4d4cc652cc18-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pb4p6\" (UID: \"f8dd0e44-71e5-4c75-bce5-4d4cc652cc18\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb4p6" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.115830 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8e1ad4f-a43f-46c7-8fca-75a84adac372-serving-cert\") pod \"service-ca-operator-777779d784-l65qs\" (UID: \"b8e1ad4f-a43f-46c7-8fca-75a84adac372\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l65qs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.115851 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/58ee4883-a1a6-425c-b079-059119125791-audit-dir\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.115869 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzmkd\" (UniqueName: \"kubernetes.io/projected/bfeea9a2-3239-4a04-a07e-7c0e0dd28bd2-kube-api-access-bzmkd\") pod \"control-plane-machine-set-operator-78cbb6b69f-f52zz\" (UID: \"bfeea9a2-3239-4a04-a07e-7c0e0dd28bd2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f52zz" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.115887 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e2e3542-c34e-4dfb-b17f-7ed4b8b9a1f4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-22vrd\" (UID: 
\"7e2e3542-c34e-4dfb-b17f-7ed4b8b9a1f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-22vrd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.115903 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95ac97f9-f168-4470-a3dd-4097a7a4abc9-cert\") pod \"ingress-canary-64v7m\" (UID: \"95ac97f9-f168-4470-a3dd-4097a7a4abc9\") " pod="openshift-ingress-canary/ingress-canary-64v7m" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.115918 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9c265449-14f8-4b89-b50c-7889b5d41c64-metrics-tls\") pod \"dns-default-hdc42\" (UID: \"9c265449-14f8-4b89-b50c-7889b5d41c64\") " pod="openshift-dns/dns-default-hdc42" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.115934 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6aace82b-ec31-40e1-808f-f06962fb0bd4-node-bootstrap-token\") pod \"machine-config-server-svdhb\" (UID: \"6aace82b-ec31-40e1-808f-f06962fb0bd4\") " pod="openshift-machine-config-operator/machine-config-server-svdhb" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.115956 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.115976 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj5bv\" (UniqueName: \"kubernetes.io/projected/c9b8f39b-2b28-41a6-a477-0efe9e1637b8-kube-api-access-hj5bv\") pod \"machine-config-operator-74547568cd-9nkqd\" (UID: \"c9b8f39b-2b28-41a6-a477-0efe9e1637b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nkqd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.115992 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116009 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116024 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc 
kubenswrapper[4869]: I0106 14:02:09.116038 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhq7q\" (UniqueName: \"kubernetes.io/projected/0a76d0f8-e02b-494e-849d-31a85ff80297-kube-api-access-dhq7q\") pod \"machine-config-controller-84d6567774-86wsv\" (UID: \"0a76d0f8-e02b-494e-849d-31a85ff80297\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-86wsv" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116056 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6a89a2d-4f24-4e29-8c2d-60dfa652a641-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x46jf\" (UID: \"f6a89a2d-4f24-4e29-8c2d-60dfa652a641\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x46jf" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116074 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c265449-14f8-4b89-b50c-7889b5d41c64-config-volume\") pod \"dns-default-hdc42\" (UID: \"9c265449-14f8-4b89-b50c-7889b5d41c64\") " pod="openshift-dns/dns-default-hdc42" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116098 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/41b0b15a-6333-48e5-8111-90e0dbe246c3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kznks\" (UID: \"41b0b15a-6333-48e5-8111-90e0dbe246c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kznks" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116115 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjpd5\" (UniqueName: \"kubernetes.io/projected/58ee4883-a1a6-425c-b079-059119125791-kube-api-access-fjpd5\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116132 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnhvn\" (UniqueName: \"kubernetes.io/projected/41b0b15a-6333-48e5-8111-90e0dbe246c3-kube-api-access-rnhvn\") pod \"ingress-operator-5b745b69d9-kznks\" (UID: \"41b0b15a-6333-48e5-8111-90e0dbe246c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kznks" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116161 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6a89a2d-4f24-4e29-8c2d-60dfa652a641-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x46jf\" (UID: \"f6a89a2d-4f24-4e29-8c2d-60dfa652a641\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x46jf" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116182 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8n77\" (UniqueName: \"kubernetes.io/projected/b8e1ad4f-a43f-46c7-8fca-75a84adac372-kube-api-access-m8n77\") pod \"service-ca-operator-777779d784-l65qs\" (UID: \"b8e1ad4f-a43f-46c7-8fca-75a84adac372\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l65qs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116198 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-audit-policies\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116216 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxq59\" (UniqueName: \"kubernetes.io/projected/2f52f78b-eb13-45bc-bf05-d1c138781664-kube-api-access-rxq59\") pod \"collect-profiles-29461800-4xp92\" (UID: \"2f52f78b-eb13-45bc-bf05-d1c138781664\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116231 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0f471c5-8336-42d0-84ff-6e85011cea0a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-h6xlw\" (UID: \"e0f471c5-8336-42d0-84ff-6e85011cea0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116253 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e2e3542-c34e-4dfb-b17f-7ed4b8b9a1f4-config\") pod \"kube-apiserver-operator-766d6c64bb-22vrd\" (UID: \"7e2e3542-c34e-4dfb-b17f-7ed4b8b9a1f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-22vrd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116267 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e2e3542-c34e-4dfb-b17f-7ed4b8b9a1f4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-22vrd\" (UID: \"7e2e3542-c34e-4dfb-b17f-7ed4b8b9a1f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-22vrd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116282 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f52f78b-eb13-45bc-bf05-d1c138781664-secret-volume\") pod \"collect-profiles-29461800-4xp92\" (UID: \"2f52f78b-eb13-45bc-bf05-d1c138781664\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116296 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116318 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/93c78ab4-fc39-46e0-9135-146854d02c0f-registration-dir\") pod \"csi-hostpathplugin-pnb6r\" (UID: \"93c78ab4-fc39-46e0-9135-146854d02c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116342 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41b0b15a-6333-48e5-8111-90e0dbe246c3-metrics-tls\") pod \"ingress-operator-5b745b69d9-kznks\" (UID: 
\"41b0b15a-6333-48e5-8111-90e0dbe246c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kznks" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116357 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bsdv\" (UniqueName: \"kubernetes.io/projected/f8dd0e44-71e5-4c75-bce5-4d4cc652cc18-kube-api-access-4bsdv\") pod \"olm-operator-6b444d44fb-pb4p6\" (UID: \"f8dd0e44-71e5-4c75-bce5-4d4cc652cc18\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb4p6" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116372 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f52f78b-eb13-45bc-bf05-d1c138781664-config-volume\") pod \"collect-profiles-29461800-4xp92\" (UID: \"2f52f78b-eb13-45bc-bf05-d1c138781664\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116390 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c9b8f39b-2b28-41a6-a477-0efe9e1637b8-proxy-tls\") pod \"machine-config-operator-74547568cd-9nkqd\" (UID: \"c9b8f39b-2b28-41a6-a477-0efe9e1637b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nkqd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116406 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c9b8f39b-2b28-41a6-a477-0efe9e1637b8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9nkqd\" (UID: \"c9b8f39b-2b28-41a6-a477-0efe9e1637b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nkqd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116421 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41b0b15a-6333-48e5-8111-90e0dbe246c3-trusted-ca\") pod \"ingress-operator-5b745b69d9-kznks\" (UID: \"41b0b15a-6333-48e5-8111-90e0dbe246c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kznks" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116435 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjbqs\" (UniqueName: \"kubernetes.io/projected/e0f471c5-8336-42d0-84ff-6e85011cea0a-kube-api-access-pjbqs\") pod \"marketplace-operator-79b997595-h6xlw\" (UID: \"e0f471c5-8336-42d0-84ff-6e85011cea0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116449 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6bd88edc-2d9d-4456-8cbd-812d024b4ed6-webhook-cert\") pod \"packageserver-d55dfcdfc-vs269\" (UID: \"6bd88edc-2d9d-4456-8cbd-812d024b4ed6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116465 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6bd88edc-2d9d-4456-8cbd-812d024b4ed6-tmpfs\") pod \"packageserver-d55dfcdfc-vs269\" (UID: \"6bd88edc-2d9d-4456-8cbd-812d024b4ed6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116484 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a76d0f8-e02b-494e-849d-31a85ff80297-proxy-tls\") pod \"machine-config-controller-84d6567774-86wsv\" (UID: \"0a76d0f8-e02b-494e-849d-31a85ff80297\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-86wsv" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116499 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tt5x\" (UniqueName: \"kubernetes.io/projected/6aace82b-ec31-40e1-808f-f06962fb0bd4-kube-api-access-4tt5x\") pod \"machine-config-server-svdhb\" (UID: \"6aace82b-ec31-40e1-808f-f06962fb0bd4\") " pod="openshift-machine-config-operator/machine-config-server-svdhb" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116516 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c9b8f39b-2b28-41a6-a477-0efe9e1637b8-images\") pod \"machine-config-operator-74547568cd-9nkqd\" (UID: \"c9b8f39b-2b28-41a6-a477-0efe9e1637b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nkqd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116531 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116546 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2bef3e32-812d-4ced-ab0d-440c1f7c535d-signing-cabundle\") pod \"service-ca-9c57cc56f-9zcbm\" (UID: \"2bef3e32-812d-4ced-ab0d-440c1f7c535d\") " pod="openshift-service-ca/service-ca-9c57cc56f-9zcbm" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116563 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f8dd0e44-71e5-4c75-bce5-4d4cc652cc18-srv-cert\") pod \"olm-operator-6b444d44fb-pb4p6\" (UID: \"f8dd0e44-71e5-4c75-bce5-4d4cc652cc18\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb4p6" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116581 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxr64\" (UniqueName: \"kubernetes.io/projected/9fd65c31-6572-4cd8-9d53-3d011e93e1a5-kube-api-access-sxr64\") pod \"catalog-operator-68c6474976-qxbrk\" (UID: \"9fd65c31-6572-4cd8-9d53-3d011e93e1a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qxbrk" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116599 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/93c78ab4-fc39-46e0-9135-146854d02c0f-mountpoint-dir\") pod \"csi-hostpathplugin-pnb6r\" (UID: \"93c78ab4-fc39-46e0-9135-146854d02c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116624 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6a89a2d-4f24-4e29-8c2d-60dfa652a641-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-x46jf\" (UID: \"f6a89a2d-4f24-4e29-8c2d-60dfa652a641\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x46jf" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116639 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6skr\" (UniqueName: \"kubernetes.io/projected/95ac97f9-f168-4470-a3dd-4097a7a4abc9-kube-api-access-c6skr\") pod \"ingress-canary-64v7m\" (UID: \"95ac97f9-f168-4470-a3dd-4097a7a4abc9\") " pod="openshift-ingress-canary/ingress-canary-64v7m" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116655 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b9cg\" (UniqueName: \"kubernetes.io/projected/9c265449-14f8-4b89-b50c-7889b5d41c64-kube-api-access-7b9cg\") pod \"dns-default-hdc42\" (UID: \"9c265449-14f8-4b89-b50c-7889b5d41c64\") " pod="openshift-dns/dns-default-hdc42" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116690 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2bef3e32-812d-4ced-ab0d-440c1f7c535d-signing-key\") pod \"service-ca-9c57cc56f-9zcbm\" (UID: \"2bef3e32-812d-4ced-ab0d-440c1f7c535d\") " pod="openshift-service-ca/service-ca-9c57cc56f-9zcbm" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116713 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbm2k\" (UniqueName: \"kubernetes.io/projected/93c78ab4-fc39-46e0-9135-146854d02c0f-kube-api-access-gbm2k\") pod \"csi-hostpathplugin-pnb6r\" (UID: \"93c78ab4-fc39-46e0-9135-146854d02c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116731 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116749 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116766 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116783 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116799 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e1ad4f-a43f-46c7-8fca-75a84adac372-config\") pod \"service-ca-operator-777779d784-l65qs\" (UID: \"b8e1ad4f-a43f-46c7-8fca-75a84adac372\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l65qs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116817 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/93c78ab4-fc39-46e0-9135-146854d02c0f-csi-data-dir\") pod \"csi-hostpathplugin-pnb6r\" (UID: \"93c78ab4-fc39-46e0-9135-146854d02c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116833 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/93c78ab4-fc39-46e0-9135-146854d02c0f-socket-dir\") pod \"csi-hostpathplugin-pnb6r\" (UID: \"93c78ab4-fc39-46e0-9135-146854d02c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116853 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9fd65c31-6572-4cd8-9d53-3d011e93e1a5-srv-cert\") pod \"catalog-operator-68c6474976-qxbrk\" (UID: \"9fd65c31-6572-4cd8-9d53-3d011e93e1a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qxbrk" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116871 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7kg9\" (UniqueName: \"kubernetes.io/projected/25bd3d1b-ff4a-4369-af67-dea3889d9db3-kube-api-access-n7kg9\") pod \"migrator-59844c95c7-vg6sr\" (UID: \"25bd3d1b-ff4a-4369-af67-dea3889d9db3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vg6sr" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116886 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/93c78ab4-fc39-46e0-9135-146854d02c0f-plugins-dir\") pod \"csi-hostpathplugin-pnb6r\" (UID: \"93c78ab4-fc39-46e0-9135-146854d02c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116903 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e8156833-621a-414d-9aab-83b8bceb2d09-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9kkzq\" (UID: \"e8156833-621a-414d-9aab-83b8bceb2d09\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9kkzq" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116920 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116935 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-grqhh\" (UniqueName: \"kubernetes.io/projected/2bef3e32-812d-4ced-ab0d-440c1f7c535d-kube-api-access-grqhh\") pod \"service-ca-9c57cc56f-9zcbm\" (UID: \"2bef3e32-812d-4ced-ab0d-440c1f7c535d\") " pod="openshift-service-ca/service-ca-9c57cc56f-9zcbm" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116957 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9fd65c31-6572-4cd8-9d53-3d011e93e1a5-profile-collector-cert\") pod \"catalog-operator-68c6474976-qxbrk\" (UID: \"9fd65c31-6572-4cd8-9d53-3d011e93e1a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qxbrk" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116974 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb6g6\" (UniqueName: \"kubernetes.io/projected/e8156833-621a-414d-9aab-83b8bceb2d09-kube-api-access-qb6g6\") pod \"package-server-manager-789f6589d5-9kkzq\" (UID: \"e8156833-621a-414d-9aab-83b8bceb2d09\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9kkzq" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.116992 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qckb\" (UniqueName: \"kubernetes.io/projected/6bd88edc-2d9d-4456-8cbd-812d024b4ed6-kube-api-access-2qckb\") pod \"packageserver-d55dfcdfc-vs269\" (UID: \"6bd88edc-2d9d-4456-8cbd-812d024b4ed6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.117008 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6aace82b-ec31-40e1-808f-f06962fb0bd4-certs\") pod \"machine-config-server-svdhb\" (UID: \"6aace82b-ec31-40e1-808f-f06962fb0bd4\") " pod="openshift-machine-config-operator/machine-config-server-svdhb" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.117041 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.117059 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e0f471c5-8336-42d0-84ff-6e85011cea0a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-h6xlw\" (UID: \"e0f471c5-8336-42d0-84ff-6e85011cea0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.117078 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bfeea9a2-3239-4a04-a07e-7c0e0dd28bd2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f52zz\" (UID: \"bfeea9a2-3239-4a04-a07e-7c0e0dd28bd2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f52zz" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.120013 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.120523 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c265449-14f8-4b89-b50c-7889b5d41c64-config-volume\") pod \"dns-default-hdc42\" (UID: \"9c265449-14f8-4b89-b50c-7889b5d41c64\") " pod="openshift-dns/dns-default-hdc42" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.121703 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-audit-policies\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.122702 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6a89a2d-4f24-4e29-8c2d-60dfa652a641-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x46jf\" (UID: \"f6a89a2d-4f24-4e29-8c2d-60dfa652a641\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x46jf" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.123054 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.124529 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6bd88edc-2d9d-4456-8cbd-812d024b4ed6-apiservice-cert\") pod \"packageserver-d55dfcdfc-vs269\" (UID: \"6bd88edc-2d9d-4456-8cbd-812d024b4ed6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.124685 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f52f78b-eb13-45bc-bf05-d1c138781664-config-volume\") pod \"collect-profiles-29461800-4xp92\" (UID: \"2f52f78b-eb13-45bc-bf05-d1c138781664\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.124811 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6bd88edc-2d9d-4456-8cbd-812d024b4ed6-webhook-cert\") pod \"packageserver-d55dfcdfc-vs269\" (UID: \"6bd88edc-2d9d-4456-8cbd-812d024b4ed6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.125533 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.126138 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0f471c5-8336-42d0-84ff-6e85011cea0a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-h6xlw\" (UID: \"e0f471c5-8336-42d0-84ff-6e85011cea0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.126721 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e2e3542-c34e-4dfb-b17f-7ed4b8b9a1f4-config\") pod \"kube-apiserver-operator-766d6c64bb-22vrd\" (UID: \"7e2e3542-c34e-4dfb-b17f-7ed4b8b9a1f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-22vrd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.127079 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e2e3542-c34e-4dfb-b17f-7ed4b8b9a1f4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-22vrd\" (UID: \"7e2e3542-c34e-4dfb-b17f-7ed4b8b9a1f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-22vrd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.127622 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.128227 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a76d0f8-e02b-494e-849d-31a85ff80297-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-86wsv\" (UID: \"0a76d0f8-e02b-494e-849d-31a85ff80297\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-86wsv" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.128510 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/93c78ab4-fc39-46e0-9135-146854d02c0f-registration-dir\") pod \"csi-hostpathplugin-pnb6r\" (UID: \"93c78ab4-fc39-46e0-9135-146854d02c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.128869 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6a89a2d-4f24-4e29-8c2d-60dfa652a641-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x46jf\" (UID: \"f6a89a2d-4f24-4e29-8c2d-60dfa652a641\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x46jf" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.129041 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/93c78ab4-fc39-46e0-9135-146854d02c0f-mountpoint-dir\") pod \"csi-hostpathplugin-pnb6r\" (UID: \"93c78ab4-fc39-46e0-9135-146854d02c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" Jan 06 14:02:09 crc kubenswrapper[4869]: E0106 14:02:09.129151 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:09.629136767 +0000 UTC m=+148.168824431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.130234 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/58ee4883-a1a6-425c-b079-059119125791-audit-dir\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.130513 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bfeea9a2-3239-4a04-a07e-7c0e0dd28bd2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f52zz\" (UID: \"bfeea9a2-3239-4a04-a07e-7c0e0dd28bd2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f52zz" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.130991 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.131880 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f52f78b-eb13-45bc-bf05-d1c138781664-secret-volume\") pod \"collect-profiles-29461800-4xp92\" (UID: \"2f52f78b-eb13-45bc-bf05-d1c138781664\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.132431 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8e1ad4f-a43f-46c7-8fca-75a84adac372-config\") pod \"service-ca-operator-777779d784-l65qs\" (UID: \"b8e1ad4f-a43f-46c7-8fca-75a84adac372\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l65qs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.133815 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41b0b15a-6333-48e5-8111-90e0dbe246c3-trusted-ca\") pod \"ingress-operator-5b745b69d9-kznks\" (UID: \"41b0b15a-6333-48e5-8111-90e0dbe246c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kznks" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.137031 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6bd88edc-2d9d-4456-8cbd-812d024b4ed6-tmpfs\") pod \"packageserver-d55dfcdfc-vs269\" (UID: \"6bd88edc-2d9d-4456-8cbd-812d024b4ed6\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.138318 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9c265449-14f8-4b89-b50c-7889b5d41c64-metrics-tls\") pod \"dns-default-hdc42\" (UID: \"9c265449-14f8-4b89-b50c-7889b5d41c64\") " pod="openshift-dns/dns-default-hdc42" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.138552 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c9b8f39b-2b28-41a6-a477-0efe9e1637b8-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9nkqd\" (UID: \"c9b8f39b-2b28-41a6-a477-0efe9e1637b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nkqd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.138635 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c9b8f39b-2b28-41a6-a477-0efe9e1637b8-images\") pod \"machine-config-operator-74547568cd-9nkqd\" (UID: \"c9b8f39b-2b28-41a6-a477-0efe9e1637b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nkqd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.138946 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/93c78ab4-fc39-46e0-9135-146854d02c0f-plugins-dir\") pod \"csi-hostpathplugin-pnb6r\" (UID: \"93c78ab4-fc39-46e0-9135-146854d02c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.139016 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8e1ad4f-a43f-46c7-8fca-75a84adac372-serving-cert\") pod \"service-ca-operator-777779d784-l65qs\" (UID: \"b8e1ad4f-a43f-46c7-8fca-75a84adac372\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l65qs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.139883 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41b0b15a-6333-48e5-8111-90e0dbe246c3-metrics-tls\") pod \"ingress-operator-5b745b69d9-kznks\" (UID: \"41b0b15a-6333-48e5-8111-90e0dbe246c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kznks" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.141235 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2bef3e32-812d-4ced-ab0d-440c1f7c535d-signing-key\") pod \"service-ca-9c57cc56f-9zcbm\" (UID: \"2bef3e32-812d-4ced-ab0d-440c1f7c535d\") " pod="openshift-service-ca/service-ca-9c57cc56f-9zcbm" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.142556 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/93c78ab4-fc39-46e0-9135-146854d02c0f-csi-data-dir\") pod \"csi-hostpathplugin-pnb6r\" (UID: \"93c78ab4-fc39-46e0-9135-146854d02c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.142743 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: 
\"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.142770 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/93c78ab4-fc39-46e0-9135-146854d02c0f-socket-dir\") pod \"csi-hostpathplugin-pnb6r\" (UID: \"93c78ab4-fc39-46e0-9135-146854d02c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.143624 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f8dd0e44-71e5-4c75-bce5-4d4cc652cc18-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pb4p6\" (UID: \"f8dd0e44-71e5-4c75-bce5-4d4cc652cc18\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb4p6" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.145205 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.145624 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2bef3e32-812d-4ced-ab0d-440c1f7c535d-signing-cabundle\") pod \"service-ca-9c57cc56f-9zcbm\" (UID: \"2bef3e32-812d-4ced-ab0d-440c1f7c535d\") " pod="openshift-service-ca/service-ca-9c57cc56f-9zcbm" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.146449 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e8156833-621a-414d-9aab-83b8bceb2d09-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9kkzq\" (UID: \"e8156833-621a-414d-9aab-83b8bceb2d09\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9kkzq" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.146592 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6aace82b-ec31-40e1-808f-f06962fb0bd4-node-bootstrap-token\") pod \"machine-config-server-svdhb\" (UID: \"6aace82b-ec31-40e1-808f-f06962fb0bd4\") " pod="openshift-machine-config-operator/machine-config-server-svdhb" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.149838 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.150906 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c9b8f39b-2b28-41a6-a477-0efe9e1637b8-proxy-tls\") pod \"machine-config-operator-74547568cd-9nkqd\" (UID: \"c9b8f39b-2b28-41a6-a477-0efe9e1637b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nkqd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.150931 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.151457 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/95ac97f9-f168-4470-a3dd-4097a7a4abc9-cert\") pod \"ingress-canary-64v7m\" (UID: \"95ac97f9-f168-4470-a3dd-4097a7a4abc9\") " pod="openshift-ingress-canary/ingress-canary-64v7m" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.151625 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.151823 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e0f471c5-8336-42d0-84ff-6e85011cea0a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-h6xlw\" (UID: \"e0f471c5-8336-42d0-84ff-6e85011cea0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.152060 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9fd65c31-6572-4cd8-9d53-3d011e93e1a5-profile-collector-cert\") pod \"catalog-operator-68c6474976-qxbrk\" (UID: \"9fd65c31-6572-4cd8-9d53-3d011e93e1a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qxbrk" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.152176 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6aace82b-ec31-40e1-808f-f06962fb0bd4-certs\") pod \"machine-config-server-svdhb\" (UID: \"6aace82b-ec31-40e1-808f-f06962fb0bd4\") " pod="openshift-machine-config-operator/machine-config-server-svdhb" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.152571 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a76d0f8-e02b-494e-849d-31a85ff80297-proxy-tls\") pod \"machine-config-controller-84d6567774-86wsv\" (UID: \"0a76d0f8-e02b-494e-849d-31a85ff80297\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-86wsv" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.153907 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.154056 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dgxjm"] Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.154682 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd5f8\" (UniqueName: \"kubernetes.io/projected/538d7a4a-0270-4948-a67f-69f1d297f371-kube-api-access-xd5f8\") pod \"router-default-5444994796-4sgbs\" (UID: \"538d7a4a-0270-4948-a67f-69f1d297f371\") " pod="openshift-ingress/router-default-5444994796-4sgbs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.154813 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f8dd0e44-71e5-4c75-bce5-4d4cc652cc18-srv-cert\") pod \"olm-operator-6b444d44fb-pb4p6\" (UID: \"f8dd0e44-71e5-4c75-bce5-4d4cc652cc18\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb4p6" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.171563 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9fd65c31-6572-4cd8-9d53-3d011e93e1a5-srv-cert\") pod \"catalog-operator-68c6474976-qxbrk\" (UID: \"9fd65c31-6572-4cd8-9d53-3d011e93e1a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qxbrk" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.176633 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0d98f64-908a-4500-aec4-8542ebf281d3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ccwrq\" (UID: \"b0d98f64-908a-4500-aec4-8542ebf281d3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ccwrq" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.182467 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l8zb\" (UniqueName: \"kubernetes.io/projected/15c48694-481d-4ac5-80cc-e153ca5fb1d1-kube-api-access-2l8zb\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.214761 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-4sgbs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.220210 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:09 crc kubenswrapper[4869]: E0106 14:02:09.220703 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:09.720684512 +0000 UTC m=+148.260372176 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.229209 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e2e3542-c34e-4dfb-b17f-7ed4b8b9a1f4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-22vrd\" (UID: \"7e2e3542-c34e-4dfb-b17f-7ed4b8b9a1f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-22vrd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.231394 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnhvn\" (UniqueName: \"kubernetes.io/projected/41b0b15a-6333-48e5-8111-90e0dbe246c3-kube-api-access-rnhvn\") pod \"ingress-operator-5b745b69d9-kznks\" (UID: \"41b0b15a-6333-48e5-8111-90e0dbe246c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kznks" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.231454 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-d9zlg"] Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.232092 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-b9gld"] Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.240128 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g6rsl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.260604 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/41b0b15a-6333-48e5-8111-90e0dbe246c3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kznks\" (UID: \"41b0b15a-6333-48e5-8111-90e0dbe246c3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kznks" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.261128 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ccwrq" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.261243 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-g9bkv" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.262393 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qcbs8" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.267569 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjpd5\" (UniqueName: \"kubernetes.io/projected/58ee4883-a1a6-425c-b079-059119125791-kube-api-access-fjpd5\") pod \"oauth-openshift-558db77b4-qmjgl\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.281583 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xv5r2"] Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.290037 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6skr\" (UniqueName: \"kubernetes.io/projected/95ac97f9-f168-4470-a3dd-4097a7a4abc9-kube-api-access-c6skr\") pod \"ingress-canary-64v7m\" (UID: \"95ac97f9-f168-4470-a3dd-4097a7a4abc9\") " pod="openshift-ingress-canary/ingress-canary-64v7m" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.300740 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7tlrk"] Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.305030 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhq7q\" (UniqueName: \"kubernetes.io/projected/0a76d0f8-e02b-494e-849d-31a85ff80297-kube-api-access-dhq7q\") pod \"machine-config-controller-84d6567774-86wsv\" (UID: \"0a76d0f8-e02b-494e-849d-31a85ff80297\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-86wsv" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.323278 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-86wsv" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.332706 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:09 crc kubenswrapper[4869]: E0106 14:02:09.334291 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:09.834269295 +0000 UTC m=+148.373956959 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.340453 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8n77\" (UniqueName: \"kubernetes.io/projected/b8e1ad4f-a43f-46c7-8fca-75a84adac372-kube-api-access-m8n77\") pod \"service-ca-operator-777779d784-l65qs\" (UID: \"b8e1ad4f-a43f-46c7-8fca-75a84adac372\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-l65qs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.342925 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6mn2d"] Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.343145 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-22vrd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.345705 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxq59\" (UniqueName: \"kubernetes.io/projected/2f52f78b-eb13-45bc-bf05-d1c138781664-kube-api-access-rxq59\") pod \"collect-profiles-29461800-4xp92\" (UID: \"2f52f78b-eb13-45bc-bf05-d1c138781664\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.363983 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj5bv\" (UniqueName: \"kubernetes.io/projected/c9b8f39b-2b28-41a6-a477-0efe9e1637b8-kube-api-access-hj5bv\") pod \"machine-config-operator-74547568cd-9nkqd\" (UID: \"c9b8f39b-2b28-41a6-a477-0efe9e1637b8\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nkqd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.367971 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kznks" Jan 06 14:02:09 crc kubenswrapper[4869]: W0106 14:02:09.369332 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34125ddb_6d12_42f3_9759_ba14a484f117.slice/crio-b6d3e9598d0b87992da6b0e0546cfe5cec43a700e3337309736256bb4eebfa84 WatchSource:0}: Error finding container b6d3e9598d0b87992da6b0e0546cfe5cec43a700e3337309736256bb4eebfa84: Status 404 returned error can't find the container with id b6d3e9598d0b87992da6b0e0546cfe5cec43a700e3337309736256bb4eebfa84 Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.375321 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:09 crc kubenswrapper[4869]: W0106 14:02:09.375811 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49e49b04_3d85_4323_931e_d0d341d52650.slice/crio-fe404b57b733b5312efea2ce71fade640c1460f0922412ee73aa80905d79a2cb WatchSource:0}: Error finding container fe404b57b733b5312efea2ce71fade640c1460f0922412ee73aa80905d79a2cb: Status 404 returned error can't find the container with id fe404b57b733b5312efea2ce71fade640c1460f0922412ee73aa80905d79a2cb Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.391792 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.402919 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb6g6\" (UniqueName: \"kubernetes.io/projected/e8156833-621a-414d-9aab-83b8bceb2d09-kube-api-access-qb6g6\") pod \"package-server-manager-789f6589d5-9kkzq\" (UID: \"e8156833-621a-414d-9aab-83b8bceb2d09\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9kkzq" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.408317 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-64v7m" Jan 06 14:02:09 crc kubenswrapper[4869]: W0106 14:02:09.419795 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod538d7a4a_0270_4948_a67f_69f1d297f371.slice/crio-dc8e2e843a0a1b0685b95e8e59be7238612b8fadf5972f75f60cf7eb46a428c9 WatchSource:0}: Error finding container dc8e2e843a0a1b0685b95e8e59be7238612b8fadf5972f75f60cf7eb46a428c9: Status 404 returned error can't find the container with id dc8e2e843a0a1b0685b95e8e59be7238612b8fadf5972f75f60cf7eb46a428c9 Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.434612 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qckb\" (UniqueName: \"kubernetes.io/projected/6bd88edc-2d9d-4456-8cbd-812d024b4ed6-kube-api-access-2qckb\") pod \"packageserver-d55dfcdfc-vs269\" (UID: \"6bd88edc-2d9d-4456-8cbd-812d024b4ed6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.435587 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:09 crc kubenswrapper[4869]: E0106 14:02:09.436029 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:09.936012641 +0000 UTC m=+148.475700305 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.448445 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxr64\" (UniqueName: \"kubernetes.io/projected/9fd65c31-6572-4cd8-9d53-3d011e93e1a5-kube-api-access-sxr64\") pod \"catalog-operator-68c6474976-qxbrk\" (UID: \"9fd65c31-6572-4cd8-9d53-3d011e93e1a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qxbrk" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.478951 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f6a89a2d-4f24-4e29-8c2d-60dfa652a641-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x46jf\" (UID: \"f6a89a2d-4f24-4e29-8c2d-60dfa652a641\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x46jf" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.483574 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b9cg\" (UniqueName: \"kubernetes.io/projected/9c265449-14f8-4b89-b50c-7889b5d41c64-kube-api-access-7b9cg\") pod \"dns-default-hdc42\" (UID: \"9c265449-14f8-4b89-b50c-7889b5d41c64\") " pod="openshift-dns/dns-default-hdc42" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.509214 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbm2k\" (UniqueName: \"kubernetes.io/projected/93c78ab4-fc39-46e0-9135-146854d02c0f-kube-api-access-gbm2k\") pod \"csi-hostpathplugin-pnb6r\" (UID: \"93c78ab4-fc39-46e0-9135-146854d02c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.538789 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:09 crc kubenswrapper[4869]: E0106 14:02:09.539207 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:10.039193676 +0000 UTC m=+148.578881340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.556960 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzmkd\" (UniqueName: \"kubernetes.io/projected/bfeea9a2-3239-4a04-a07e-7c0e0dd28bd2-kube-api-access-bzmkd\") pod \"control-plane-machine-set-operator-78cbb6b69f-f52zz\" (UID: \"bfeea9a2-3239-4a04-a07e-7c0e0dd28bd2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f52zz" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.560530 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bsdv\" (UniqueName: \"kubernetes.io/projected/f8dd0e44-71e5-4c75-bce5-4d4cc652cc18-kube-api-access-4bsdv\") pod \"olm-operator-6b444d44fb-pb4p6\" (UID: \"f8dd0e44-71e5-4c75-bce5-4d4cc652cc18\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb4p6" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.560891 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjbqs\" (UniqueName: \"kubernetes.io/projected/e0f471c5-8336-42d0-84ff-6e85011cea0a-kube-api-access-pjbqs\") pod \"marketplace-operator-79b997595-h6xlw\" (UID: \"e0f471c5-8336-42d0-84ff-6e85011cea0a\") " pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.567287 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b9gld" event={"ID":"959dc13f-609b-4272-abe4-e26a0f79ab8c","Type":"ContainerStarted","Data":"808cd3bb6b0d93109d8f9993462ccdcaf50858a64d257c007b679212edf5923f"} Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.569530 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l65qs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.576942 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.581207 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" event={"ID":"96e8a661-1f08-489b-afcb-18f86bf6d4e3","Type":"ContainerStarted","Data":"f978a47a8bb604dd9bfbc439187cfafed0321c336c7c77a2bc98aca6a4149c40"} Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.585868 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f52zz" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.593615 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.593686 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xv5r2" event={"ID":"34125ddb-6d12-42f3-9759-ba14a484f117","Type":"ContainerStarted","Data":"b6d3e9598d0b87992da6b0e0546cfe5cec43a700e3337309736256bb4eebfa84"} Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.600048 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nkqd" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.601026 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sc7mj" event={"ID":"be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad","Type":"ContainerStarted","Data":"fe5e5a04c9b3e990226e42d3dfa858b60ea90b3b2fcf8dc1545aecf45d15f47a"} Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.601077 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sc7mj" event={"ID":"be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad","Type":"ContainerStarted","Data":"12d327a876abe8e2ba95c157bdd69ab12128144d2f3fa5bc5549cd21e560ab90"} Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.602111 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tt5x\" (UniqueName: \"kubernetes.io/projected/6aace82b-ec31-40e1-808f-f06962fb0bd4-kube-api-access-4tt5x\") pod \"machine-config-server-svdhb\" (UID: \"6aace82b-ec31-40e1-808f-f06962fb0bd4\") " pod="openshift-machine-config-operator/machine-config-server-svdhb" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.604415 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-vx9gs" event={"ID":"f1d294f9-a755-49bc-bc10-5b4e9739a914","Type":"ContainerStarted","Data":"a77d31a91622c6d9296266c29ff1ee3df36c03e56bf6cea5148478ca1551c9c6"} Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.604474 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-vx9gs" event={"ID":"f1d294f9-a755-49bc-bc10-5b4e9739a914","Type":"ContainerStarted","Data":"4181f22b90aedb6e1d5c9342a130f0543200f9bd5478c23036adf62410d0437d"} Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.604595 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7kg9\" (UniqueName: \"kubernetes.io/projected/25bd3d1b-ff4a-4369-af67-dea3889d9db3-kube-api-access-n7kg9\") pod \"migrator-59844c95c7-vg6sr\" (UID: \"25bd3d1b-ff4a-4369-af67-dea3889d9db3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vg6sr" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.604640 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-vx9gs" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.608695 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-4sgbs" event={"ID":"538d7a4a-0270-4948-a67f-69f1d297f371","Type":"ContainerStarted","Data":"dc8e2e843a0a1b0685b95e8e59be7238612b8fadf5972f75f60cf7eb46a428c9"} Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.609174 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qxbrk" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.613310 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-vx9gs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.613408 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vx9gs" podUID="f1d294f9-a755-49bc-bc10-5b4e9739a914" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.623619 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grqhh\" (UniqueName: \"kubernetes.io/projected/2bef3e32-812d-4ced-ab0d-440c1f7c535d-kube-api-access-grqhh\") pod \"service-ca-9c57cc56f-9zcbm\" (UID: \"2bef3e32-812d-4ced-ab0d-440c1f7c535d\") " pod="openshift-service-ca/service-ca-9c57cc56f-9zcbm" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.630129 4869 generic.go:334] "Generic (PLEG): container finished" podID="8e5dcd19-170b-4d3a-b1f2-995f97fdad41" containerID="9a614ab633a1264593f6b88cb4024b6827a41ec6505d48410475283f3eac5148" exitCode=0 Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.630266 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dgtcf" event={"ID":"8e5dcd19-170b-4d3a-b1f2-995f97fdad41","Type":"ContainerDied","Data":"9a614ab633a1264593f6b88cb4024b6827a41ec6505d48410475283f3eac5148"} Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.630319 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dgtcf" event={"ID":"8e5dcd19-170b-4d3a-b1f2-995f97fdad41","Type":"ContainerStarted","Data":"5c531aa8925099d65f6848a9ea9c9228da2f06ff085b837e1dacb3798caea2a6"} Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.635805 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb4p6" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.639938 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:09 crc kubenswrapper[4869]: E0106 14:02:09.640480 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:10.1404585 +0000 UTC m=+148.680146174 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.640727 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.640825 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.640880 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.641007 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.641064 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.645536 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" event={"ID":"cd17fb22-d612-4949-8e94-f0aa870439d9","Type":"ContainerStarted","Data":"978f2263c563b6032ede404b1e611a2c31a326ead82fe4442d97bc5f4982b10f"} Jan 06 14:02:09 crc kubenswrapper[4869]: E0106 14:02:09.647212 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:10.14719336 +0000 UTC m=+148.686881024 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.648334 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.649185 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.649725 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.652550 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6mn2d" event={"ID":"49e49b04-3d85-4323-931e-d0d341d52650","Type":"ContainerStarted","Data":"fe404b57b733b5312efea2ce71fade640c1460f0922412ee73aa80905d79a2cb"} Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.652967 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vg6sr" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.659501 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x46jf" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.668526 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.683872 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9kkzq" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.689026 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" event={"ID":"312bcf02-2d7a-4ac1-87fd-25b2e1e42826","Type":"ContainerStarted","Data":"050968efce60f61f647c22815a9e7aee6f4249a10973a9d6bf4d292c5686c1ed"} Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.691090 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qr849" event={"ID":"ab544c1b-884d-47a9-9e75-b133b58ca4db","Type":"ContainerStarted","Data":"a19816b38a978b83052b5c10f0323e230af9f51f6b54df95df7084ae46beab1d"} Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.697357 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8t96r" event={"ID":"7462c7be-1f9d-4f4b-a844-71a3518a27e2","Type":"ContainerStarted","Data":"e3cfe4009bb4d74694f2402453f3f123d8c3a712e1bafeba5e210d32671edb46"} Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.697411 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8t96r" event={"ID":"7462c7be-1f9d-4f4b-a844-71a3518a27e2","Type":"ContainerStarted","Data":"5c3c1e4011abea44aa97644cc0134e9719636264c420a2b247f1b9c8259470eb"} Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.698829 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-d9zlg" event={"ID":"f26c6409-5ba8-4b46-bb01-9a038091cdfd","Type":"ContainerStarted","Data":"d2c1e538eba1269ac1c42aa7010c5a0e344f0478d6184fec50c50feba933b9fe"} Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.709275 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-hdc42" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.721403 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.734360 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.735488 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.738576 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.742942 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:09 crc kubenswrapper[4869]: E0106 14:02:09.744005 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-06 14:02:10.243976734 +0000 UTC m=+148.783664398 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.744238 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-svdhb" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.844689 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:09 crc kubenswrapper[4869]: E0106 14:02:09.845053 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:10.345037883 +0000 UTC m=+148.884725557 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.848770 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hgpcv" event={"ID":"62309e3d-7bdc-4573-8a0d-5b485f618ffe","Type":"ContainerStarted","Data":"970ffa0086a6b3e549d4ce5d7087f772667dd3ac550d2a04496d61c58965bb18"} Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.848906 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgftz" event={"ID":"70fb1714-50f4-4504-8912-1b0ed4fb508e","Type":"ContainerStarted","Data":"d436f044753d886ef73ea028642ebacea411b3f578c373ead28500fb13570bf6"} Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.848997 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgftz" event={"ID":"70fb1714-50f4-4504-8912-1b0ed4fb508e","Type":"ContainerStarted","Data":"5a09d064ba9ebcd80596f3ecf64ee5e7b205f766211075f65f183ad8589be5ad"} Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.849096 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" event={"ID":"fcc80584-0b81-45b0-a790-539bfc78c894","Type":"ContainerStarted","Data":"4fb4438db1e286b50cbe4dbaf21765273f73eefd4c602f1dfd76d505b34f4083"} Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.915237 4869 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9zcbm" Jan 06 14:02:09 crc kubenswrapper[4869]: I0106 14:02:09.948375 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:09 crc kubenswrapper[4869]: E0106 14:02:09.948794 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:10.448774753 +0000 UTC m=+148.988462417 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.051844 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:10 crc kubenswrapper[4869]: E0106 14:02:10.052632 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:10.552615225 +0000 UTC m=+149.092302889 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.053548 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g6rsl"] Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.156091 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:10 crc kubenswrapper[4869]: E0106 14:02:10.156877 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-06 14:02:10.656857398 +0000 UTC m=+149.196545072 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.258301 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:10 crc kubenswrapper[4869]: E0106 14:02:10.262156 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:10.762139169 +0000 UTC m=+149.301826833 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.322418 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-22vrd"] Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.327342 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kznks"] Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.347973 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-g9bkv"] Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.349618 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qmjgl"] Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.351697 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ccwrq"] Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.353303 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qcbs8"] Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.371957 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:10 crc kubenswrapper[4869]: E0106 14:02:10.372374 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:10.872357352 +0000 UTC m=+149.412045006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.475703 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:10 crc kubenswrapper[4869]: E0106 14:02:10.476532 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:10.976511574 +0000 UTC m=+149.516199238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.586338 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:10 crc kubenswrapper[4869]: E0106 14:02:10.586611 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:11.086572072 +0000 UTC m=+149.626259736 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.591082 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:10 crc kubenswrapper[4869]: E0106 14:02:10.591494 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:11.091470073 +0000 UTC m=+149.631157737 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:10 crc kubenswrapper[4869]: W0106 14:02:10.675842 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e2e3542_c34e_4dfb_b17f_7ed4b8b9a1f4.slice/crio-3534a4d4d13731d013da059490640f772027b3c58126825429ccc2de284377bb WatchSource:0}: Error finding container 3534a4d4d13731d013da059490640f772027b3c58126825429ccc2de284377bb: Status 404 returned error can't find the container with id 3534a4d4d13731d013da059490640f772027b3c58126825429ccc2de284377bb Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.703231 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:10 crc kubenswrapper[4869]: E0106 14:02:10.703557 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:11.203541296 +0000 UTC m=+149.743228960 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.804482 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:10 crc kubenswrapper[4869]: E0106 14:02:10.804911 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:11.304898082 +0000 UTC m=+149.844585736 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:10 crc kubenswrapper[4869]: W0106 14:02:10.851833 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a5395fe_a04c_4913_a749_f7316689b418.slice/crio-a051d8215373ce3a1d29552f671cbca898c65f995f57a90cecbf265fc57f0171 WatchSource:0}: Error finding container a051d8215373ce3a1d29552f671cbca898c65f995f57a90cecbf265fc57f0171: Status 404 returned error can't find the container with id a051d8215373ce3a1d29552f671cbca898c65f995f57a90cecbf265fc57f0171 Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.868856 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xv5r2" event={"ID":"34125ddb-6d12-42f3-9759-ba14a484f117","Type":"ContainerStarted","Data":"8851ad1be362b5392f53e8205fb155d44d1a72b7c982dd93a8b1e68c9e2f73af"} Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.901988 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-vx9gs" podStartSLOduration=130.901966353 podStartE2EDuration="2m10.901966353s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:10.899305583 +0000 UTC m=+149.438993247" watchObservedRunningTime="2026-01-06 14:02:10.901966353 +0000 UTC m=+149.441654017" Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.905288 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:10 crc kubenswrapper[4869]: E0106 14:02:10.905712 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:11.405693053 +0000 UTC m=+149.945380707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.907563 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-g9bkv" event={"ID":"7fbbef50-6a8d-4b24-ab17-b626c7d251d5","Type":"ContainerStarted","Data":"76c78b934dfcbf892fffaaa7d92dbb76fa662aa52e735cd21655d0bcc04f7f1e"} Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.928746 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-8t96r" event={"ID":"7462c7be-1f9d-4f4b-a844-71a3518a27e2","Type":"ContainerStarted","Data":"ee2a1b343a635aef14b9e14df3d9955936023ec0bed344fae33b08f9d9d55b01"} Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.931185 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-86wsv"] Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.953102 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b9gld" event={"ID":"959dc13f-609b-4272-abe4-e26a0f79ab8c","Type":"ContainerStarted","Data":"4dd4a4e6dc24f5294b2480e75aa49722ff4f615e44628239407241c3050c5e3f"} Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.986831 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hgpcv" event={"ID":"62309e3d-7bdc-4573-8a0d-5b485f618ffe","Type":"ContainerStarted","Data":"73b223811f77330bf6a56998258944630bea4ecc0804dc0ad1a8dc465e0770ac"} Jan 06 14:02:10 crc kubenswrapper[4869]: I0106 14:02:10.988997 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g6rsl" event={"ID":"18a99257-541b-4b05-bdef-21c591879b90","Type":"ContainerStarted","Data":"097ce1a79a94793b6efceb1269ae240a7463a51414c1e6d6d1bcdf3990dbedbc"} Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.008040 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:11 crc kubenswrapper[4869]: E0106 14:02:11.011320 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-06 14:02:11.511300373 +0000 UTC m=+150.050988037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.019781 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kznks" event={"ID":"41b0b15a-6333-48e5-8111-90e0dbe246c3","Type":"ContainerStarted","Data":"dd4a4e19546592ec06536d87c21da118ff32e0aa298e81de899c5dacdf930217"} Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.056439 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pgftz" podStartSLOduration=131.056403997 podStartE2EDuration="2m11.056403997s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:11.024193207 +0000 UTC m=+149.563880871" watchObservedRunningTime="2026-01-06 14:02:11.056403997 +0000 UTC m=+149.596091661" Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.058683 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sc7mj" event={"ID":"be5c9ed4-3ad3-4db5-89f2-0eb5f4e4e4ad","Type":"ContainerStarted","Data":"b24ef3f504cff9ba37cf4c988c88116768d74fbd11044b72f93bbb187a1de82b"} Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.113075 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:11 crc kubenswrapper[4869]: E0106 14:02:11.114748 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:11.614727204 +0000 UTC m=+150.154414868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.143454 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6mn2d" event={"ID":"49e49b04-3d85-4323-931e-d0d341d52650","Type":"ContainerStarted","Data":"4fa9771561c24e703cd6516ccda08c735a749128e37ae595b623be54c15c2101"} Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.145003 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6mn2d" Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.151514 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-4sgbs" event={"ID":"538d7a4a-0270-4948-a67f-69f1d297f371","Type":"ContainerStarted","Data":"cdb451c3bed4725643630a270f442113607b33a79109f6deba32e4e6f72580da"} Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.158382 4869 generic.go:334] "Generic (PLEG): container finished" podID="ab544c1b-884d-47a9-9e75-b133b58ca4db" containerID="7fb70e8d5e2a357986d77bef821116353bcae7371f6882877c9b28b985d830bf" exitCode=0 Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.158498 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qr849" event={"ID":"ab544c1b-884d-47a9-9e75-b133b58ca4db","Type":"ContainerDied","Data":"7fb70e8d5e2a357986d77bef821116353bcae7371f6882877c9b28b985d830bf"} Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.206571 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" event={"ID":"cd17fb22-d612-4949-8e94-f0aa870439d9","Type":"ContainerStarted","Data":"6062791677ae424edae5154e54750bf3ddeb28bcd76b823d4445de15b1873f88"} Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.207975 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.211200 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-svdhb" event={"ID":"6aace82b-ec31-40e1-808f-f06962fb0bd4","Type":"ContainerStarted","Data":"5a02a27fcec1d0774819963d938ce6a0daf8c82829eee3c3e175ed5e395581a8"} Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.214467 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.216473 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-4sgbs" Jan 06 14:02:11 crc kubenswrapper[4869]: E0106 14:02:11.218030 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:11.718006322 +0000 UTC m=+150.257693986 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.233558 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" event={"ID":"312bcf02-2d7a-4ac1-87fd-25b2e1e42826","Type":"ContainerStarted","Data":"9cb06f13d0dc8fface2ee2821b8d2c560059cae31ac3e535b7e46a3f4ebc7ed9"} Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.235126 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.253313 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-8t96r" podStartSLOduration=131.253294684 podStartE2EDuration="2m11.253294684s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:11.253042287 +0000 UTC m=+149.792729961" watchObservedRunningTime="2026-01-06 14:02:11.253294684 +0000 UTC m=+149.792982348" Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.315627 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:11 crc kubenswrapper[4869]: E0106 14:02:11.318001 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:11.817971882 +0000 UTC m=+150.357659546 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.318814 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:11 crc kubenswrapper[4869]: E0106 14:02:11.323646 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:11.823626663 +0000 UTC m=+150.363314327 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.327319 4869 patch_prober.go:28] interesting pod/console-operator-58897d9998-6mn2d container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.327425 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6mn2d" podUID="49e49b04-3d85-4323-931e-d0d341d52650" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/readyz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.330502 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xv5r2" podStartSLOduration=131.330475436 podStartE2EDuration="2m11.330475436s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:11.283685506 +0000 UTC m=+149.823373170" watchObservedRunningTime="2026-01-06 14:02:11.330475436 +0000 UTC m=+149.870163100" Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.347093 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.382485 4869 generic.go:334] "Generic (PLEG): container finished" podID="96e8a661-1f08-489b-afcb-18f86bf6d4e3" containerID="3b8d03de836916f81907ec6ebd7d8ab01818294a6d00c3e26a89fb9e8943cba1" exitCode=0 Jan 06 14:02:11 crc 
kubenswrapper[4869]: I0106 14:02:11.383504 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" event={"ID":"96e8a661-1f08-489b-afcb-18f86bf6d4e3","Type":"ContainerDied","Data":"3b8d03de836916f81907ec6ebd7d8ab01818294a6d00c3e26a89fb9e8943cba1"} Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.400882 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-hgpcv" podStartSLOduration=132.400854954 podStartE2EDuration="2m12.400854954s" podCreationTimestamp="2026-01-06 13:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:11.331114662 +0000 UTC m=+149.870802326" watchObservedRunningTime="2026-01-06 14:02:11.400854954 +0000 UTC m=+149.940542618" Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.420387 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:11 crc kubenswrapper[4869]: E0106 14:02:11.421121 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:11.921101835 +0000 UTC m=+150.460789499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.431259 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-22vrd" event={"ID":"7e2e3542-c34e-4dfb-b17f-7ed4b8b9a1f4","Type":"ContainerStarted","Data":"3534a4d4d13731d013da059490640f772027b3c58126825429ccc2de284377bb"} Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.451698 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" podStartSLOduration=130.45163596 podStartE2EDuration="2m10.45163596s" podCreationTimestamp="2026-01-06 14:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:11.450440298 +0000 UTC m=+149.990127982" watchObservedRunningTime="2026-01-06 14:02:11.45163596 +0000 UTC m=+149.991323624" Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.454251 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-b9gld" podStartSLOduration=131.454175288 podStartE2EDuration="2m11.454175288s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-06 14:02:11.405210761 +0000 UTC m=+149.944898425" watchObservedRunningTime="2026-01-06 14:02:11.454175288 +0000 UTC m=+149.993862962" Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.510688 4869 patch_prober.go:28] interesting pod/router-default-5444994796-4sgbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 06 14:02:11 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 06 14:02:11 crc kubenswrapper[4869]: [+]process-running ok Jan 06 14:02:11 crc kubenswrapper[4869]: healthz check failed Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.511096 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sgbs" podUID="538d7a4a-0270-4948-a67f-69f1d297f371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.548252 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sc7mj" podStartSLOduration=132.548231329 podStartE2EDuration="2m12.548231329s" podCreationTimestamp="2026-01-06 13:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:11.534547924 +0000 UTC m=+150.074235588" watchObservedRunningTime="2026-01-06 14:02:11.548231329 +0000 UTC m=+150.087918983" Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.554334 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.557738 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:11 crc kubenswrapper[4869]: E0106 14:02:11.565314 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:12.065287695 +0000 UTC m=+150.604975349 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.599605 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-4sgbs" podStartSLOduration=131.599578241 podStartE2EDuration="2m11.599578241s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:11.579113144 +0000 UTC m=+150.118800808" watchObservedRunningTime="2026-01-06 14:02:11.599578241 +0000 UTC m=+150.139265915" Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.607992 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7tlrk" event={"ID":"0f66ca0c-0cf9-40d8-9ed3-e55a3ce6a399","Type":"ContainerStarted","Data":"d2a4b558d898196ea8afc6fde69fbf62f205e9152bc43d3842834cfc7ddfe50a"} Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.622142 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-6mn2d" podStartSLOduration=131.622105722 podStartE2EDuration="2m11.622105722s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:11.60855636 +0000 UTC m=+150.148244014" watchObservedRunningTime="2026-01-06 14:02:11.622105722 +0000 UTC m=+150.161793386" Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.659473 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:11 crc kubenswrapper[4869]: E0106 14:02:11.659885 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:12.15986781 +0000 UTC m=+150.699555474 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.690403 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-vx9gs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.690867 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vx9gs" podUID="f1d294f9-a755-49bc-bc10-5b4e9739a914" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.690533 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" event={"ID":"fcc80584-0b81-45b0-a790-539bfc78c894","Type":"ContainerStarted","Data":"1a519af15f51e2d28c39096c8a0c051302292cff16cceb4702e916bc12b4d0ae"} Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.771605 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:11 crc kubenswrapper[4869]: E0106 14:02:11.773602 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:12.273587497 +0000 UTC m=+150.813275161 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.784420 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" podStartSLOduration=131.784387125 podStartE2EDuration="2m11.784387125s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:11.714427617 +0000 UTC m=+150.254115281" watchObservedRunningTime="2026-01-06 14:02:11.784387125 +0000 UTC m=+150.324074789" Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.873215 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:11 crc kubenswrapper[4869]: E0106 14:02:11.876287 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:12.374874301 +0000 UTC m=+150.914561965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.942715 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-vh62x" podStartSLOduration=131.942693882 podStartE2EDuration="2m11.942693882s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:11.828691898 +0000 UTC m=+150.368379572" watchObservedRunningTime="2026-01-06 14:02:11.942693882 +0000 UTC m=+150.482381546" Jan 06 14:02:11 crc kubenswrapper[4869]: I0106 14:02:11.979890 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:11 crc kubenswrapper[4869]: E0106 14:02:11.980153 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:12.480142462 +0000 UTC m=+151.019830116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.094060 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:12 crc kubenswrapper[4869]: E0106 14:02:12.094582 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:12.594564607 +0000 UTC m=+151.134252271 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.207724 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:12 crc kubenswrapper[4869]: E0106 14:02:12.208128 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:12.708114229 +0000 UTC m=+151.247801893 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.230075 4869 patch_prober.go:28] interesting pod/router-default-5444994796-4sgbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 06 14:02:12 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 06 14:02:12 crc kubenswrapper[4869]: [+]process-running ok Jan 06 14:02:12 crc kubenswrapper[4869]: healthz check failed Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.230481 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sgbs" podUID="538d7a4a-0270-4948-a67f-69f1d297f371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.240148 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92"] Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.274741 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-l65qs"] Jan 06 14:02:12 crc kubenswrapper[4869]: W0106 14:02:12.289851 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f52f78b_eb13_45bc_bf05_d1c138781664.slice/crio-cfaa1d036cf464fd5f814f32a29027f26201df9e63ec93820e010e86293d79b9 WatchSource:0}: Error finding container cfaa1d036cf464fd5f814f32a29027f26201df9e63ec93820e010e86293d79b9: Status 404 returned error can't find the container with id cfaa1d036cf464fd5f814f32a29027f26201df9e63ec93820e010e86293d79b9 Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.309437 4869 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:12 crc kubenswrapper[4869]: E0106 14:02:12.310319 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:12.810298087 +0000 UTC m=+151.349985751 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.311805 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9nkqd"] Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.415080 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:12 crc kubenswrapper[4869]: E0106 14:02:12.415434 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:12.915419544 +0000 UTC m=+151.455107208 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.525638 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:12 crc kubenswrapper[4869]: E0106 14:02:12.526130 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:13.026087119 +0000 UTC m=+151.565774783 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.526429 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:12 crc kubenswrapper[4869]: E0106 14:02:12.526836 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:13.026816739 +0000 UTC m=+151.566504403 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.554167 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9kkzq"] Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.641058 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:12 crc kubenswrapper[4869]: E0106 14:02:12.641764 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:13.141743037 +0000 UTC m=+151.681430701 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.719411 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h6xlw"] Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.745235 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:12 crc kubenswrapper[4869]: E0106 14:02:12.745853 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:13.245796135 +0000 UTC m=+151.785483799 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.750865 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb4p6"] Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.835774 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92" event={"ID":"2f52f78b-eb13-45bc-bf05-d1c138781664","Type":"ContainerStarted","Data":"cfaa1d036cf464fd5f814f32a29027f26201df9e63ec93820e010e86293d79b9"} Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.848973 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:12 crc kubenswrapper[4869]: E0106 14:02:12.849394 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:13.349377441 +0000 UTC m=+151.889065105 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.872613 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f52zz"] Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.890568 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-22vrd" event={"ID":"7e2e3542-c34e-4dfb-b17f-7ed4b8b9a1f4","Type":"ContainerStarted","Data":"5b26cb7e0dcc320d2f6ccc05539d1405f63642c9b6b2a586307dcc940f3f04bf"} Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.899819 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-d9zlg" event={"ID":"f26c6409-5ba8-4b46-bb01-9a038091cdfd","Type":"ContainerStarted","Data":"ed24678cd1014f2ad1eb5d570a18dc89f5066bebfa1901c5f6bdec015ea480d5"} Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.899874 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-d9zlg" event={"ID":"f26c6409-5ba8-4b46-bb01-9a038091cdfd","Type":"ContainerStarted","Data":"41e36f7d4023479b94913426802d60e79339272c1da652bf9a6439d419bf6379"} Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.954268 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-g9bkv" event={"ID":"7fbbef50-6a8d-4b24-ab17-b626c7d251d5","Type":"ContainerStarted","Data":"d53daa1fc84d1cb968dd9a550bc24ec4d85d13390151f29ea718c27917f343dd"} Jan 06 14:02:12 crc kubenswrapper[4869]: I0106 14:02:12.962102 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:12 crc kubenswrapper[4869]: E0106 14:02:12.963641 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:13.463605781 +0000 UTC m=+152.003293445 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.004454 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269"] Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.048072 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-22vrd" podStartSLOduration=133.048051326 podStartE2EDuration="2m13.048051326s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:13.019198776 +0000 UTC m=+151.558886440" watchObservedRunningTime="2026-01-06 14:02:13.048051326 +0000 UTC m=+151.587738990" Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.048907 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qxbrk"] Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.062037 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kznks" event={"ID":"41b0b15a-6333-48e5-8111-90e0dbe246c3","Type":"ContainerStarted","Data":"ad5adb3878be4e8adbc44858aac5600ccb18a9594b64da81c2b5708467b8afc8"} Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.070784 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:13 crc kubenswrapper[4869]: E0106 14:02:13.071527 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:13.571509512 +0000 UTC m=+152.111197176 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.072809 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hdc42"] Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.096097 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7tlrk" event={"ID":"0f66ca0c-0cf9-40d8-9ed3-e55a3ce6a399","Type":"ContainerStarted","Data":"06708b12658cbe944dbfd4e6a5bb6088ee54168f9cee11c44cb213ca7afb6b82"} Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.096148 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7tlrk" event={"ID":"0f66ca0c-0cf9-40d8-9ed3-e55a3ce6a399","Type":"ContainerStarted","Data":"a7d71cc77738943bfaba9ba8b8d517b0f3208eb7f84e7705b5192ef21c10288d"} Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.136020 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-vg6sr"] Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.139580 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-d9zlg" podStartSLOduration=133.13955676 podStartE2EDuration="2m13.13955676s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:13.081460528 +0000 UTC m=+151.621148192" watchObservedRunningTime="2026-01-06 14:02:13.13955676 +0000 UTC m=+151.679244424" Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.150033 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" event={"ID":"58ee4883-a1a6-425c-b079-059119125791","Type":"ContainerStarted","Data":"fe0f54845059a9f59607629a9b180cb561427bce9e27e5d20e35878fd811d277"} Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.154449 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.155695 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x46jf"] Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.158602 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-64v7m"] Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.167629 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kznks" podStartSLOduration=133.167614399 podStartE2EDuration="2m13.167614399s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:13.166075468 +0000 UTC m=+151.705763132" watchObservedRunningTime="2026-01-06 
14:02:13.167614399 +0000 UTC m=+151.707302063" Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.184780 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:13 crc kubenswrapper[4869]: E0106 14:02:13.185163 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:13.685146786 +0000 UTC m=+152.224834450 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.185421 4869 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-qmjgl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.38:6443/healthz\": dial tcp 10.217.0.38:6443: connect: connection refused" start-of-body= Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.185454 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" podUID="58ee4883-a1a6-425c-b079-059119125791" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.38:6443/healthz\": dial tcp 10.217.0.38:6443: connect: connection refused" Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.211852 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dgtcf" event={"ID":"8e5dcd19-170b-4d3a-b1f2-995f97fdad41","Type":"ContainerStarted","Data":"486730966f657bb563372c3dc9a58bc515560c56a199f3842c1bb74885a897d5"} Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.214055 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dgtcf" Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.241546 4869 patch_prober.go:28] interesting pod/router-default-5444994796-4sgbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 06 14:02:13 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 06 14:02:13 crc kubenswrapper[4869]: [+]process-running ok Jan 06 14:02:13 crc kubenswrapper[4869]: healthz check failed Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.241618 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sgbs" podUID="538d7a4a-0270-4948-a67f-69f1d297f371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.254889 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-l65qs" event={"ID":"b8e1ad4f-a43f-46c7-8fca-75a84adac372","Type":"ContainerStarted","Data":"f9147e331427d33677a19582deae900ceeff2df53151d2c0808675a64319849e"} Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.257528 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7tlrk" podStartSLOduration=134.257490418 podStartE2EDuration="2m14.257490418s" podCreationTimestamp="2026-01-06 13:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:13.213621526 +0000 UTC m=+151.753309190" watchObservedRunningTime="2026-01-06 14:02:13.257490418 +0000 UTC m=+151.797178082" Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.259546 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" podStartSLOduration=133.259538333 podStartE2EDuration="2m13.259538333s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:13.258622579 +0000 UTC m=+151.798310243" watchObservedRunningTime="2026-01-06 14:02:13.259538333 +0000 UTC m=+151.799225997" Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.265720 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ccwrq" event={"ID":"b0d98f64-908a-4500-aec4-8542ebf281d3","Type":"ContainerStarted","Data":"fd3fdbed9903f7efdc807d81a4a85b016d17a7f52c176835a1960bd9c340c16f"} Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.265766 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ccwrq" event={"ID":"b0d98f64-908a-4500-aec4-8542ebf281d3","Type":"ContainerStarted","Data":"9605e4bfec953debefc1ba36d44458ea7620827fcefcf2e7e4b262fa14c6b69e"} Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.289595 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:13 crc kubenswrapper[4869]: E0106 14:02:13.291254 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:13.791218949 +0000 UTC m=+152.330906613 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.309614 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dgtcf" podStartSLOduration=133.309589889 podStartE2EDuration="2m13.309589889s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:13.303829456 +0000 UTC m=+151.843517140" watchObservedRunningTime="2026-01-06 14:02:13.309589889 +0000 UTC m=+151.849277553" Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.314658 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qcbs8" event={"ID":"7a5395fe-a04c-4913-a749-f7316689b418","Type":"ContainerStarted","Data":"7751c73b63bfec14f3df38f903f8faf985e21adeda1552df1d1ef84762eac851"} Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.314728 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qcbs8" event={"ID":"7a5395fe-a04c-4913-a749-f7316689b418","Type":"ContainerStarted","Data":"a051d8215373ce3a1d29552f671cbca898c65f995f57a90cecbf265fc57f0171"} Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.348070 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ccwrq" podStartSLOduration=133.348049436 podStartE2EDuration="2m13.348049436s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:13.339351254 +0000 UTC m=+151.879038918" watchObservedRunningTime="2026-01-06 14:02:13.348049436 +0000 UTC m=+151.887737100" Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.351330 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-86wsv" event={"ID":"0a76d0f8-e02b-494e-849d-31a85ff80297","Type":"ContainerStarted","Data":"d311db0cb3541d6113f8eae9dc32d25c09060932e7f9f87206ec4b7bb8492eb7"} Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.351381 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-86wsv" event={"ID":"0a76d0f8-e02b-494e-849d-31a85ff80297","Type":"ContainerStarted","Data":"fc7152fa9422b1a1f13aac14dfafb20b3240db1d0ef5c1a8d274bca9d9d97f45"} Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.364852 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nkqd" event={"ID":"c9b8f39b-2b28-41a6-a477-0efe9e1637b8","Type":"ContainerStarted","Data":"0911e4258ca3c7a081f737cfd8e4457572860ed3ee73e08ab8907a1a90f75f38"} Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 
14:02:13.390774 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g6rsl" event={"ID":"18a99257-541b-4b05-bdef-21c591879b90","Type":"ContainerStarted","Data":"a1cc7e3b79412d7c637fd1de462da6366d2800a32f663f5e935dbcf869ecf50f"} Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.391497 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:13 crc kubenswrapper[4869]: E0106 14:02:13.392199 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:13.892181455 +0000 UTC m=+152.431869119 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.393108 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9zcbm"] Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.393414 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qcbs8" podStartSLOduration=133.393398457 podStartE2EDuration="2m13.393398457s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:13.390322875 +0000 UTC m=+151.930010539" watchObservedRunningTime="2026-01-06 14:02:13.393398457 +0000 UTC m=+151.933086121" Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.395708 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-svdhb" event={"ID":"6aace82b-ec31-40e1-808f-f06962fb0bd4","Type":"ContainerStarted","Data":"c9b9e18aebfb603d64f54cd402a969dcab9253babc454a11b30b9b7c3b914ea9"} Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.443602 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-6mn2d" Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.481697 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-svdhb" podStartSLOduration=7.481654254 podStartE2EDuration="7.481654254s" podCreationTimestamp="2026-01-06 14:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:13.480845682 +0000 UTC m=+152.020533346" watchObservedRunningTime="2026-01-06 14:02:13.481654254 +0000 UTC m=+152.021341918" Jan 06 14:02:13 crc 
kubenswrapper[4869]: I0106 14:02:13.481822 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g6rsl" podStartSLOduration=133.481817848 podStartE2EDuration="2m13.481817848s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:13.434107924 +0000 UTC m=+151.973795588" watchObservedRunningTime="2026-01-06 14:02:13.481817848 +0000 UTC m=+152.021505512" Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.495060 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:13 crc kubenswrapper[4869]: E0106 14:02:13.513745 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:14.01372231 +0000 UTC m=+152.553409974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.596521 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:13 crc kubenswrapper[4869]: E0106 14:02:13.600779 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:14.100761364 +0000 UTC m=+152.640449028 (durationBeforeRetry 500ms). 
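
[Annotation] The pod_startup_latency_tracker entries above are self-consistent: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp, and the m=+N suffix on every timestamp is the kubelet's monotonic clock, i.e. seconds since the kubelet process started. Worked through for openshift-apiserver-operator-796bbdcf4f-g6rsl:

    14:02:13.481817848 - 14:00:00.000000000 = 133.481817848 s   (logged as 2m13.481817848s)
    14:02:13.481817848 - 152.021505512 s    = 13:59:41.460312336 (kubelet process start)

The 0001-01-01 00:00:00 values for firstStartedPulling/lastFinishedPulling are Go zero-value timestamps: the tracker observed no image pull for these containers, which is what you would expect when the images are already cached on the node.
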
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:13 crc kubenswrapper[4869]: W0106 14:02:13.612603 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25bd3d1b_ff4a_4369_af67_dea3889d9db3.slice/crio-9fa1ee8a927a580c93796536f1127a03560201fdfcd316acf29411fc1734a843 WatchSource:0}: Error finding container 9fa1ee8a927a580c93796536f1127a03560201fdfcd316acf29411fc1734a843: Status 404 returned error can't find the container with id 9fa1ee8a927a580c93796536f1127a03560201fdfcd316acf29411fc1734a843 Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.651845 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.747286 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:13 crc kubenswrapper[4869]: E0106 14:02:13.772943 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:14.272915491 +0000 UTC m=+152.812603155 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.778083 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:13 crc kubenswrapper[4869]: E0106 14:02:13.779595 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:14.279581459 +0000 UTC m=+152.819269123 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.891134 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:13 crc kubenswrapper[4869]: E0106 14:02:13.891612 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:14.3915931 +0000 UTC m=+152.931280764 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:13 crc kubenswrapper[4869]: I0106 14:02:13.998551 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:13 crc kubenswrapper[4869]: E0106 14:02:13.999313 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:14.499269155 +0000 UTC m=+153.038956819 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.059014 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pnb6r"] Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.101171 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:14 crc kubenswrapper[4869]: E0106 14:02:14.101633 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:14.601602717 +0000 UTC m=+153.141290381 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.202511 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:14 crc kubenswrapper[4869]: E0106 14:02:14.203132 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:14.703119277 +0000 UTC m=+153.242806931 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.242532 4869 patch_prober.go:28] interesting pod/router-default-5444994796-4sgbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 06 14:02:14 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 06 14:02:14 crc kubenswrapper[4869]: [+]process-running ok Jan 06 14:02:14 crc kubenswrapper[4869]: healthz check failed Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.242604 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sgbs" podUID="538d7a4a-0270-4948-a67f-69f1d297f371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.315932 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:14 crc kubenswrapper[4869]: E0106 14:02:14.316246 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:14.816224338 +0000 UTC m=+153.355912002 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.417019 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:14 crc kubenswrapper[4869]: E0106 14:02:14.417361 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:14.917349247 +0000 UTC m=+153.457036911 (durationBeforeRetry 500ms). 
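
[Annotation] The strict cadence of these failures comes from the gate that nestedpendingoperations.go:348 logs: a failed volume operation records a deadline, and any re-attempt before that deadline is refused with "No retries permitted until ..." (here durationBeforeRetry is 500ms on every cycle). An illustrative sketch of that pattern, not kubelet source:

    // Illustrative sketch: a failed operation records a retry deadline, and
    // re-attempts before that deadline are refused rather than executed.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    type pendingOp struct{ retryAfter time.Time }

    func (p *pendingOp) run(do func() error) error {
        now := time.Now()
        if now.Before(p.retryAfter) {
            return fmt.Errorf("no retries permitted until %s", p.retryAfter.Format(time.RFC3339Nano))
        }
        if err := do(); err != nil {
            p.retryAfter = now.Add(500 * time.Millisecond) // durationBeforeRetry 500ms, as logged
            return err
        }
        return nil
    }

    func main() {
        op := &pendingOp{}
        mount := func() error {
            return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
        }
        for i := 0; i < 3; i++ {
            fmt.Println(op.run(mount)) // the middle attempt lands inside the 500ms window and is refused
            time.Sleep(250 * time.Millisecond)
        }
    }
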
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.448925 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92" event={"ID":"2f52f78b-eb13-45bc-bf05-d1c138781664","Type":"ContainerStarted","Data":"ac847b6b32460687045ab4180b0b84142fc56f873cc7b4a8a4f056d9c0660d3b"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.475483 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" event={"ID":"58ee4883-a1a6-425c-b079-059119125791","Type":"ContainerStarted","Data":"e967663143c7b011fb0e68592291a60283998246b654a59fd7ffd81c792f0fef"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.506505 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qxbrk" event={"ID":"9fd65c31-6572-4cd8-9d53-3d011e93e1a5","Type":"ContainerStarted","Data":"ef210582b37f1d22c54196003fb1ae7af49b0ab13e4c57e22e6219b2f9e02faa"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.508031 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kznks" event={"ID":"41b0b15a-6333-48e5-8111-90e0dbe246c3","Type":"ContainerStarted","Data":"3cb8360105073813e4591ad8e1e1ed49c6b9ad4413973836953dc8558efc87cc"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.510142 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l65qs" event={"ID":"b8e1ad4f-a43f-46c7-8fca-75a84adac372","Type":"ContainerStarted","Data":"699cfed5a864a5f163a13c326f6c40395cc919d80a62914d77beb63eb60e18ea"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.511368 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f14387864e946cd8d661a46ee4334af9fbc72a6d6aa80e0c559e2df5c94c92ff"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.519598 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:14 crc kubenswrapper[4869]: E0106 14:02:14.520246 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:15.020225675 +0000 UTC m=+153.559913339 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.529927 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"74e759f4551572694ef0016087635fc0199270bc022b5258b260926c9d75cb97"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.541721 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x46jf" event={"ID":"f6a89a2d-4f24-4e29-8c2d-60dfa652a641","Type":"ContainerStarted","Data":"772522ca02f3be9591d83731a405cc74aa267c7b7a949233ff00a7b47b0dd1ee"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.547529 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hdc42" event={"ID":"9c265449-14f8-4b89-b50c-7889b5d41c64","Type":"ContainerStarted","Data":"9144aaab96992daf406d3ab5eee915839a307a7bdc57c4fcada702485b60ab97"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.563230 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" event={"ID":"93c78ab4-fc39-46e0-9135-146854d02c0f","Type":"ContainerStarted","Data":"b2da2580f21fa4c0c839a427c843babc6b69315b335f0b55e3586145eaac75f0"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.570940 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9zcbm" event={"ID":"2bef3e32-812d-4ced-ab0d-440c1f7c535d","Type":"ContainerStarted","Data":"d179b92960ce2a1373de198db87babe99f5100b33fd58367010c00b28aa7ccc5"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.613842 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92" podStartSLOduration=134.613819914 podStartE2EDuration="2m14.613819914s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:14.60621177 +0000 UTC m=+153.145899434" watchObservedRunningTime="2026-01-06 14:02:14.613819914 +0000 UTC m=+153.153507568" Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.616024 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" event={"ID":"96e8a661-1f08-489b-afcb-18f86bf6d4e3","Type":"ContainerStarted","Data":"0df413b130e1e16fa2d0297491f5cea401db13ba4e261af555a99a5acba8f71b"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.623513 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:14 crc kubenswrapper[4869]: E0106 14:02:14.624698 
4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:15.124680293 +0000 UTC m=+153.664367957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.627732 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb4p6" event={"ID":"f8dd0e44-71e5-4c75-bce5-4d4cc652cc18","Type":"ContainerStarted","Data":"27e3408bea2984fa129259655df34d700c66fd59d49c3d6c0e1c0176ea87d472"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.628726 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-64v7m" event={"ID":"95ac97f9-f168-4470-a3dd-4097a7a4abc9","Type":"ContainerStarted","Data":"73de18a74f7d16020404543e2256f0ef9f45d9afb5986dcd0e91216a519a222a"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.640712 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f52zz" event={"ID":"bfeea9a2-3239-4a04-a07e-7c0e0dd28bd2","Type":"ContainerStarted","Data":"43fcde93147316e0129cff7711c0446d3af6372792227b01d4efbd257c7bcbf4"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.643955 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269" event={"ID":"6bd88edc-2d9d-4456-8cbd-812d024b4ed6","Type":"ContainerStarted","Data":"55c309173a61c7932b4633f4fd3920703734739fcc9c5096f5816b740b4573c4"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.661248 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qr849" event={"ID":"ab544c1b-884d-47a9-9e75-b133b58ca4db","Type":"ContainerStarted","Data":"9282f7561952569b912153fe140a38cb9a10e8e05fcb40ce296191a22d424802"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.700825 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vg6sr" event={"ID":"25bd3d1b-ff4a-4369-af67-dea3889d9db3","Type":"ContainerStarted","Data":"9fa1ee8a927a580c93796536f1127a03560201fdfcd316acf29411fc1734a843"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.710015 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0c21d2d3247bd146c000292b42b49fa43da7d600a4a49f91d457539d4413b2cc"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.726711 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:14 crc 
kubenswrapper[4869]: E0106 14:02:14.728408 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:15.228381672 +0000 UTC m=+153.768069516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.744139 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nkqd" event={"ID":"c9b8f39b-2b28-41a6-a477-0efe9e1637b8","Type":"ContainerStarted","Data":"c07a97ee3747f0a8b2f316752134cbc984b2afa5dbb04a8fae9050148e05aab1"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.750929 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" event={"ID":"e0f471c5-8336-42d0-84ff-6e85011cea0a","Type":"ContainerStarted","Data":"c8e2101c3530a4a6c91eb105f8ea1ac261f37002a8c324762b9721773d2f9ebc"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.768045 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9kkzq" event={"ID":"e8156833-621a-414d-9aab-83b8bceb2d09","Type":"ContainerStarted","Data":"e310f2ccbda199ecb2ed096ea0fac7c3c0b26fc2fef2bd26e85f3737d25d1a35"} Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.828207 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:14 crc kubenswrapper[4869]: E0106 14:02:14.828849 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:15.328833655 +0000 UTC m=+153.868521319 (durationBeforeRetry 500ms). 
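
[Annotation] csi-hostpathplugin-pnb6r, seen receiving a SyncLoop UPDATE and a ContainerStarted event above, is the pod that will end these retries: its registrar sidecar creates a socket under the kubelet's plugin-registration directory, after which the kubelet adds the driver to its registered list. A node-local sketch, assuming it runs on the crc node itself; the socket name in the comment is the usual node-driver-registrar convention, not a path taken from this log:

    // Node-local sketch: list the kubelet plugin-registration sockets.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        entries, err := os.ReadDir("/var/lib/kubelet/plugins_registry")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for _, e := range entries {
            fmt.Println(e.Name()) // e.g. kubevirt.io.hostpath-provisioner-reg.sock
        }
    }
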
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.878502 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-l65qs" podStartSLOduration=133.878481181 podStartE2EDuration="2m13.878481181s" podCreationTimestamp="2026-01-06 14:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:14.778027608 +0000 UTC m=+153.317715292" watchObservedRunningTime="2026-01-06 14:02:14.878481181 +0000 UTC m=+153.418168845" Jan 06 14:02:14 crc kubenswrapper[4869]: I0106 14:02:14.940938 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:14 crc kubenswrapper[4869]: E0106 14:02:14.943644 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:15.443612569 +0000 UTC m=+153.983300233 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.042907 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:15 crc kubenswrapper[4869]: E0106 14:02:15.043721 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:15.543646891 +0000 UTC m=+154.083334545 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.149309 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:15 crc kubenswrapper[4869]: E0106 14:02:15.150452 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:15.6503592 +0000 UTC m=+154.190046874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.230618 4869 patch_prober.go:28] interesting pod/router-default-5444994796-4sgbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 06 14:02:15 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 06 14:02:15 crc kubenswrapper[4869]: [+]process-running ok Jan 06 14:02:15 crc kubenswrapper[4869]: healthz check failed Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.230694 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sgbs" podUID="538d7a4a-0270-4948-a67f-69f1d297f371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.253859 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:15 crc kubenswrapper[4869]: E0106 14:02:15.254297 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:15.754280155 +0000 UTC m=+154.293967819 (durationBeforeRetry 500ms). 
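
[Annotation] The router startup probe above fails at the application layer (HTTP 500, with the healthz body naming the failing sub-checks [-]backend-http and [-]has-synced), while the oauth-openshift readiness probe fails one layer lower (connection refused). The kubelet applies the same rule to both: any transport error fails an HTTP probe, and only a 2xx/3xx status passes. A self-contained sketch of that rule, using the oauth endpoint from this log (reachable only from inside the cluster's pod network):

    // Minimal sketch of the kubelet's HTTP probe rule: a transport error
    // ("connection refused", timeout) fails the probe; only 2xx/3xx passes.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func probe(url string) string {
        client := &http.Client{
            Timeout:   time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return "failure: " + err.Error() // e.g. "connect: connection refused"
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            return "success"
        }
        return fmt.Sprintf("failure: HTTP probe failed with statuscode: %d", resp.StatusCode)
    }

    func main() {
        fmt.Println(probe("https://10.217.0.38:6443/healthz"))
    }
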
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.355450 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:15 crc kubenswrapper[4869]: E0106 14:02:15.355874 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:15.855859106 +0000 UTC m=+154.395546770 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.374817 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dgtcf" Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.415851 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" podStartSLOduration=135.415827428 podStartE2EDuration="2m15.415827428s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:14.879632551 +0000 UTC m=+153.419320235" watchObservedRunningTime="2026-01-06 14:02:15.415827428 +0000 UTC m=+153.955515092" Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.457449 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:15 crc kubenswrapper[4869]: E0106 14:02:15.458166 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:15.958147858 +0000 UTC m=+154.497835522 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.479203 4869 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-qmjgl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.38:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.479300 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" podUID="58ee4883-a1a6-425c-b079-059119125791" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.38:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.558653 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:15 crc kubenswrapper[4869]: E0106 14:02:15.559278 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:16.059257908 +0000 UTC m=+154.598945572 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.683543 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:15 crc kubenswrapper[4869]: E0106 14:02:15.683938 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:16.183925587 +0000 UTC m=+154.723613241 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.784862 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:15 crc kubenswrapper[4869]: E0106 14:02:15.785028 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:16.284994055 +0000 UTC m=+154.824681719 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.785545 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:15 crc kubenswrapper[4869]: E0106 14:02:15.785906 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:16.285890669 +0000 UTC m=+154.825578333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.849822 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9zcbm" event={"ID":"2bef3e32-812d-4ced-ab0d-440c1f7c535d","Type":"ContainerStarted","Data":"322fcc217d165fbb97c9862a8d7dfc22fe443c96fa620b7af28811b2d92023ea"} Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.876002 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-64v7m" event={"ID":"95ac97f9-f168-4470-a3dd-4097a7a4abc9","Type":"ContainerStarted","Data":"7a823b64da2fd5ac5a2e85f3c6755019c4c4e92e79ac599a210f005c2c9924fc"} Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.887183 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:15 crc kubenswrapper[4869]: E0106 14:02:15.889481 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:16.389458525 +0000 UTC m=+154.929146189 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.914861 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-9zcbm" podStartSLOduration=134.914838152 podStartE2EDuration="2m14.914838152s" podCreationTimestamp="2026-01-06 14:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:15.876408846 +0000 UTC m=+154.416096510" watchObservedRunningTime="2026-01-06 14:02:15.914838152 +0000 UTC m=+154.454525816" Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.916420 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qxbrk" event={"ID":"9fd65c31-6572-4cd8-9d53-3d011e93e1a5","Type":"ContainerStarted","Data":"0c715831b49d1195fcb4beecd77b3c63218e139a7f1235c6ae5bc92c6d74bab8"} Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.917421 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qxbrk" Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.949774 4869 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-qxbrk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.949860 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qxbrk" podUID="9fd65c31-6572-4cd8-9d53-3d011e93e1a5" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.955146 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269" event={"ID":"6bd88edc-2d9d-4456-8cbd-812d024b4ed6","Type":"ContainerStarted","Data":"9ac183f394eb3b384103a7aee0f9e65861f889de2d7b2fcbcb75238ccf6fa20f"} Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.956346 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269" Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.969702 4869 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-vs269 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:5443/healthz\": dial tcp 10.217.0.17:5443: connect: connection refused" start-of-body= Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.969776 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269" podUID="6bd88edc-2d9d-4456-8cbd-812d024b4ed6" containerName="packageserver" probeResult="failure" 
output="Get \"https://10.217.0.17:5443/healthz\": dial tcp 10.217.0.17:5443: connect: connection refused" Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.971799 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qxbrk" podStartSLOduration=135.971772882 podStartE2EDuration="2m15.971772882s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:15.970067936 +0000 UTC m=+154.509755600" watchObservedRunningTime="2026-01-06 14:02:15.971772882 +0000 UTC m=+154.511460536" Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.972124 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-64v7m" podStartSLOduration=9.972118582 podStartE2EDuration="9.972118582s" podCreationTimestamp="2026-01-06 14:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:15.92787505 +0000 UTC m=+154.467562714" watchObservedRunningTime="2026-01-06 14:02:15.972118582 +0000 UTC m=+154.511806246" Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.988394 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:15 crc kubenswrapper[4869]: E0106 14:02:15.989904 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:16.489886886 +0000 UTC m=+155.029574550 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:15 crc kubenswrapper[4869]: I0106 14:02:15.992249 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-86wsv" event={"ID":"0a76d0f8-e02b-494e-849d-31a85ff80297","Type":"ContainerStarted","Data":"4d19ee477cf6ef9707be781842b2fc6756a9c6dc0d2b962823cf2d33ac669ce4"} Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.012776 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269" podStartSLOduration=136.012759367 podStartE2EDuration="2m16.012759367s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:16.010205049 +0000 UTC m=+154.549892713" watchObservedRunningTime="2026-01-06 14:02:16.012759367 +0000 UTC m=+154.552447031" Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.020165 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-g9bkv" event={"ID":"7fbbef50-6a8d-4b24-ab17-b626c7d251d5","Type":"ContainerStarted","Data":"8af873a4a319007df651f8a74ec591b6ec93d481307626613c2d8c1c1d29cd8b"} Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.039844 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9kkzq" event={"ID":"e8156833-621a-414d-9aab-83b8bceb2d09","Type":"ContainerStarted","Data":"d015da75407b44ae5645a07d30654cce94165079b7d498e71e146074e630be34"} Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.050213 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-86wsv" podStartSLOduration=136.050193176 podStartE2EDuration="2m16.050193176s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:16.049785755 +0000 UTC m=+154.589473439" watchObservedRunningTime="2026-01-06 14:02:16.050193176 +0000 UTC m=+154.589880840" Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.104356 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:16 crc kubenswrapper[4869]: E0106 14:02:16.105787 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:16.605750809 +0000 UTC m=+155.145438533 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.106003 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.106542 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e4bbb7cd39c54b402e6530bdcd57b0cbadf5ef6d406cb327bdbda417b269f8c2"} Jan 06 14:02:16 crc kubenswrapper[4869]: E0106 14:02:16.106702 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:16.606692255 +0000 UTC m=+155.146379919 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.140121 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x46jf" event={"ID":"f6a89a2d-4f24-4e29-8c2d-60dfa652a641","Type":"ContainerStarted","Data":"40a0f219b099014d526cda1a1532ae471a91f8c9b06620f8a9eff9c780848a21"} Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.142682 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-g9bkv" podStartSLOduration=136.142646544 podStartE2EDuration="2m16.142646544s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:16.107221099 +0000 UTC m=+154.646908753" watchObservedRunningTime="2026-01-06 14:02:16.142646544 +0000 UTC m=+154.682334208" Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.146892 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vg6sr" event={"ID":"25bd3d1b-ff4a-4369-af67-dea3889d9db3","Type":"ContainerStarted","Data":"f2403a11f4f0958a79cee52d8163a81df24741d51f210929817dcb21d6338aba"} Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.156040 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c9b52a4e3b4fbd6f61cd2cdf72de539f1bbcdd57bf37ea8b62dcaa3e1e8046df"} Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.156727 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.157856 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb4p6" event={"ID":"f8dd0e44-71e5-4c75-bce5-4d4cc652cc18","Type":"ContainerStarted","Data":"2eea02a6464460da1c228d9b3ba41b0c5176f735ab59601a747a6e487cd9322b"} Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.158762 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb4p6" Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.172644 4869 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-pb4p6 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.172737 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb4p6" podUID="f8dd0e44-71e5-4c75-bce5-4d4cc652cc18" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.196680 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x46jf" podStartSLOduration=136.196646737 podStartE2EDuration="2m16.196646737s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:16.195962639 +0000 UTC m=+154.735650303" watchObservedRunningTime="2026-01-06 14:02:16.196646737 +0000 UTC m=+154.736334401" Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.201070 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hdc42" event={"ID":"9c265449-14f8-4b89-b50c-7889b5d41c64","Type":"ContainerStarted","Data":"58db5e34efceddfb18b902fdb21971a3cadb917adad4058679c588b402a669bc"} Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.207928 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:16 crc kubenswrapper[4869]: E0106 14:02:16.208326 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:16.708293627 +0000 UTC m=+155.247981421 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.218689 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0702ab4cc0effacd553c501fbcd6f4225902d0b85e26164a946cee2ebad3560d"} Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.230172 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f52zz" event={"ID":"bfeea9a2-3239-4a04-a07e-7c0e0dd28bd2","Type":"ContainerStarted","Data":"0001a5faf640071a3c8bf9ed14e9d3a4f43708be9b8a944621508aa978e061fe"} Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.254001 4869 patch_prober.go:28] interesting pod/router-default-5444994796-4sgbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 06 14:02:16 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 06 14:02:16 crc kubenswrapper[4869]: [+]process-running ok Jan 06 14:02:16 crc kubenswrapper[4869]: healthz check failed Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.254083 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sgbs" podUID="538d7a4a-0270-4948-a67f-69f1d297f371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.289528 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" event={"ID":"e0f471c5-8336-42d0-84ff-6e85011cea0a","Type":"ContainerStarted","Data":"9612009ac8530ea582ca9abe55fd4aeb44ac20ad73a4f2e4ea5373ae3973fec1"} Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.299980 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.309085 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:16 crc kubenswrapper[4869]: E0106 14:02:16.309421 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:16.809408408 +0000 UTC m=+155.349096072 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.311564 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb4p6" podStartSLOduration=136.311548375 podStartE2EDuration="2m16.311548375s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:16.252153579 +0000 UTC m=+154.791841243" watchObservedRunningTime="2026-01-06 14:02:16.311548375 +0000 UTC m=+154.851236039" Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.325574 4869 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-h6xlw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.325639 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" podUID="e0f471c5-8336-42d0-84ff-6e85011cea0a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.325863 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nkqd" event={"ID":"c9b8f39b-2b28-41a6-a477-0efe9e1637b8","Type":"ContainerStarted","Data":"67daa49b2e2901c0f1da13daac2b148f28b5061762115e0d91e8a9accbf1bf94"} Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.349402 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qr849" event={"ID":"ab544c1b-884d-47a9-9e75-b133b58ca4db","Type":"ContainerStarted","Data":"8c023ace94369184574127059ce2a04a14933205ac5c3f0634a401a1c49385f6"} Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.364217 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.401820 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" podStartSLOduration=136.401793045 podStartE2EDuration="2m16.401793045s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:16.375429571 +0000 UTC m=+154.915117235" watchObservedRunningTime="2026-01-06 14:02:16.401793045 +0000 UTC m=+154.941480709" Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.403234 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f52zz" podStartSLOduration=136.403225073 
podStartE2EDuration="2m16.403225073s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:16.400435688 +0000 UTC m=+154.940123352" watchObservedRunningTime="2026-01-06 14:02:16.403225073 +0000 UTC m=+154.942912737" Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.415168 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:16 crc kubenswrapper[4869]: E0106 14:02:16.417427 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:16.917398601 +0000 UTC m=+155.457086275 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.520347 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:16 crc kubenswrapper[4869]: E0106 14:02:16.521104 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:17.02109046 +0000 UTC m=+155.560778124 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.538132 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-qr849" podStartSLOduration=137.538102044 podStartE2EDuration="2m17.538102044s" podCreationTimestamp="2026-01-06 13:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:16.533554543 +0000 UTC m=+155.073242227" watchObservedRunningTime="2026-01-06 14:02:16.538102044 +0000 UTC m=+155.077789708" Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.622316 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:16 crc kubenswrapper[4869]: E0106 14:02:16.623032 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:17.123011871 +0000 UTC m=+155.662699545 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.726439 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:16 crc kubenswrapper[4869]: E0106 14:02:16.726840 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:17.226822553 +0000 UTC m=+155.766510217 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.827729 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:16 crc kubenswrapper[4869]: E0106 14:02:16.828166 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:17.328130848 +0000 UTC m=+155.867818512 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:16 crc kubenswrapper[4869]: I0106 14:02:16.929883 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:16 crc kubenswrapper[4869]: E0106 14:02:16.930353 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:17.430328687 +0000 UTC m=+155.970016421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.032319 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:17 crc kubenswrapper[4869]: E0106 14:02:17.032577 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:17.532536006 +0000 UTC m=+156.072223680 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.032770 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:17 crc kubenswrapper[4869]: E0106 14:02:17.033259 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:17.533236554 +0000 UTC m=+156.072924298 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.133919 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:17 crc kubenswrapper[4869]: E0106 14:02:17.134062 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:17.634037745 +0000 UTC m=+156.173725409 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.134363 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:17 crc kubenswrapper[4869]: E0106 14:02:17.134824 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:17.634804216 +0000 UTC m=+156.174491950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.220822 4869 patch_prober.go:28] interesting pod/router-default-5444994796-4sgbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 06 14:02:17 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 06 14:02:17 crc kubenswrapper[4869]: [+]process-running ok Jan 06 14:02:17 crc kubenswrapper[4869]: healthz check failed Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.220907 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sgbs" podUID="538d7a4a-0270-4948-a67f-69f1d297f371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.236506 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:17 crc kubenswrapper[4869]: E0106 14:02:17.236704 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:17.736676907 +0000 UTC m=+156.276364571 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.236841 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:17 crc kubenswrapper[4869]: E0106 14:02:17.237351 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:17.737343624 +0000 UTC m=+156.277031288 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.337770 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:17 crc kubenswrapper[4869]: E0106 14:02:17.337835 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:17.837780056 +0000 UTC m=+156.377467720 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.338457 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:17 crc kubenswrapper[4869]: E0106 14:02:17.338887 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:17.838870665 +0000 UTC m=+156.378558329 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.355154 4869 generic.go:334] "Generic (PLEG): container finished" podID="2f52f78b-eb13-45bc-bf05-d1c138781664" containerID="ac847b6b32460687045ab4180b0b84142fc56f873cc7b4a8a4f056d9c0660d3b" exitCode=0 Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.355239 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92" event={"ID":"2f52f78b-eb13-45bc-bf05-d1c138781664","Type":"ContainerDied","Data":"ac847b6b32460687045ab4180b0b84142fc56f873cc7b4a8a4f056d9c0660d3b"} Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.356910 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" event={"ID":"93c78ab4-fc39-46e0-9135-146854d02c0f","Type":"ContainerStarted","Data":"4464b161d3e98e4e80aaa30e762e27419ca4a6c90bf10c78565160f7d20ea152"} Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.358874 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vg6sr" event={"ID":"25bd3d1b-ff4a-4369-af67-dea3889d9db3","Type":"ContainerStarted","Data":"bf0fe2cb6622cf85ea2107558b3f92f5d07609b0bca43977112dd66bdbd06149"} Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.361576 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9kkzq" event={"ID":"e8156833-621a-414d-9aab-83b8bceb2d09","Type":"ContainerStarted","Data":"22a437a4d7a65efccb787e62e198b3e08abfca4562e2640a79a4adb1f9f9c8c7"} Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.361828 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9kkzq" Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.363566 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hdc42" event={"ID":"9c265449-14f8-4b89-b50c-7889b5d41c64","Type":"ContainerStarted","Data":"5beec16ac15370fd36eebb09cabc78637d570e8973111f23b6ae2f17c56acd4d"} Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.366469 4869 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-h6xlw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.366816 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" podUID="e0f471c5-8336-42d0-84ff-6e85011cea0a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.377256 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nkqd" podStartSLOduration=137.377233769 podStartE2EDuration="2m17.377233769s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:16.623261508 +0000 UTC m=+155.162949182" watchObservedRunningTime="2026-01-06 14:02:17.377233769 +0000 UTC m=+155.916921433" Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.383079 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pb4p6" Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.448901 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.450511 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-vg6sr" podStartSLOduration=137.450492376 podStartE2EDuration="2m17.450492376s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:17.409156532 +0000 UTC m=+155.948844196" watchObservedRunningTime="2026-01-06 14:02:17.450492376 +0000 UTC m=+155.990180040" Jan 06 14:02:17 crc kubenswrapper[4869]: E0106 14:02:17.452615 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:17.952589491 +0000 UTC m=+156.492277165 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.453160 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-hdc42" podStartSLOduration=11.453151516 podStartE2EDuration="11.453151516s" podCreationTimestamp="2026-01-06 14:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:17.448386389 +0000 UTC m=+155.988074053" watchObservedRunningTime="2026-01-06 14:02:17.453151516 +0000 UTC m=+155.992839180" Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.501153 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-qxbrk" Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.535789 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9kkzq" podStartSLOduration=137.535761002 podStartE2EDuration="2m17.535761002s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:17.495863117 +0000 UTC m=+156.035550771" watchObservedRunningTime="2026-01-06 14:02:17.535761002 +0000 UTC m=+156.075448666" Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.552423 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:17 crc kubenswrapper[4869]: E0106 14:02:17.552755 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:18.052742786 +0000 UTC m=+156.592430450 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.653414 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:17 crc kubenswrapper[4869]: E0106 14:02:17.653731 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:18.153686371 +0000 UTC m=+156.693374025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.653813 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:17 crc kubenswrapper[4869]: E0106 14:02:17.654233 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:18.154213035 +0000 UTC m=+156.693900899 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.755679 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:17 crc kubenswrapper[4869]: E0106 14:02:17.756423 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:18.256404454 +0000 UTC m=+156.796092118 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.857961 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:17 crc kubenswrapper[4869]: E0106 14:02:17.858437 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:18.358415978 +0000 UTC m=+156.898103712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.964318 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:17 crc kubenswrapper[4869]: E0106 14:02:17.964543 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:18.464508461 +0000 UTC m=+157.004196125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:17 crc kubenswrapper[4869]: I0106 14:02:17.964598 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:17 crc kubenswrapper[4869]: E0106 14:02:17.965314 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:18.465304091 +0000 UTC m=+157.004991755 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.038995 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.039892 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.044453 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.044944 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.065658 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:18 crc kubenswrapper[4869]: E0106 14:02:18.065826 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:18.565793725 +0000 UTC m=+157.105481389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.065891 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73dcccf9-165b-4be1-b27d-8d97f1db34ad-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"73dcccf9-165b-4be1-b27d-8d97f1db34ad\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.065926 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73dcccf9-165b-4be1-b27d-8d97f1db34ad-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"73dcccf9-165b-4be1-b27d-8d97f1db34ad\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.065997 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:18 crc kubenswrapper[4869]: E0106 14:02:18.066328 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:18.566313228 +0000 UTC m=+157.106000892 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.108650 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.167905 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:18 crc kubenswrapper[4869]: E0106 14:02:18.168145 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:18.668095546 +0000 UTC m=+157.207783210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.168217 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73dcccf9-165b-4be1-b27d-8d97f1db34ad-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"73dcccf9-165b-4be1-b27d-8d97f1db34ad\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.168287 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73dcccf9-165b-4be1-b27d-8d97f1db34ad-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"73dcccf9-165b-4be1-b27d-8d97f1db34ad\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.168367 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73dcccf9-165b-4be1-b27d-8d97f1db34ad-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"73dcccf9-165b-4be1-b27d-8d97f1db34ad\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.206543 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73dcccf9-165b-4be1-b27d-8d97f1db34ad-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"73dcccf9-165b-4be1-b27d-8d97f1db34ad\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.222033 4869 patch_prober.go:28] interesting 
pod/router-default-5444994796-4sgbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 06 14:02:18 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld
Jan 06 14:02:18 crc kubenswrapper[4869]: [+]process-running ok
Jan 06 14:02:18 crc kubenswrapper[4869]: healthz check failed
Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.222105 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sgbs" podUID="538d7a4a-0270-4948-a67f-69f1d297f371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.269986 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:18 crc kubenswrapper[4869]: E0106 14:02:18.270379 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:18.770363197 +0000 UTC m=+157.310050861 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.331078 4869 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.364719 4869 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-vs269 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.364809 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269" podUID="6bd88edc-2d9d-4456-8cbd-812d024b4ed6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.17:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.371304 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:18 crc kubenswrapper[4869]: E0106 14:02:18.371528
4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:18.871471706 +0000 UTC m=+157.411159370 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.372082 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:18 crc kubenswrapper[4869]: E0106 14:02:18.372378 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:18.87236434 +0000 UTC m=+157.412051994 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.374606 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-vx9gs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.374645 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-vx9gs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.374653 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vx9gs" podUID="f1d294f9-a755-49bc-bc10-5b4e9739a914" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.374681 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-vx9gs" podUID="f1d294f9-a755-49bc-bc10-5b4e9739a914" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.375875 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.385220 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" event={"ID":"93c78ab4-fc39-46e0-9135-146854d02c0f","Type":"ContainerStarted","Data":"e26f44a6ec7c47eb22da18f18ba4f52e26ceda4993787d4247095d2b9078477c"} Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.386305 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-hdc42" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.473900 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:18 crc kubenswrapper[4869]: E0106 14:02:18.474098 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:18.974068966 +0000 UTC m=+157.513756630 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.477072 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:18 crc kubenswrapper[4869]: E0106 14:02:18.477428 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:18.977410035 +0000 UTC m=+157.517097749 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.478638 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.478723 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.492466 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.513546 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-vs269" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.577782 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:18 crc kubenswrapper[4869]: E0106 14:02:18.577927 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:19.077906119 +0000 UTC m=+157.617593793 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.578185 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:18 crc kubenswrapper[4869]: E0106 14:02:18.578833 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:19.078803822 +0000 UTC m=+157.618491666 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.649221 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.649617 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.677989 4869 patch_prober.go:28] interesting pod/apiserver-76f77b778f-qr849 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 06 14:02:18 crc kubenswrapper[4869]: [+]log ok
Jan 06 14:02:18 crc kubenswrapper[4869]: [+]etcd ok
Jan 06 14:02:18 crc kubenswrapper[4869]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 06 14:02:18 crc kubenswrapper[4869]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 06 14:02:18 crc kubenswrapper[4869]: [+]poststarthook/max-in-flight-filter ok
Jan 06 14:02:18 crc kubenswrapper[4869]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 06 14:02:18 crc kubenswrapper[4869]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 06 14:02:18 crc kubenswrapper[4869]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 06 14:02:18 crc kubenswrapper[4869]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Jan 06 14:02:18 crc kubenswrapper[4869]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 06 14:02:18 crc kubenswrapper[4869]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 06 14:02:18 crc kubenswrapper[4869]: [+]poststarthook/openshift.io-startinformers ok
Jan 06 14:02:18 crc kubenswrapper[4869]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 06 14:02:18 crc kubenswrapper[4869]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 06 14:02:18 crc kubenswrapper[4869]: livez check failed
Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.678111 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-qr849" podUID="ab544c1b-884d-47a9-9e75-b133b58ca4db" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.679141 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:18 crc kubenswrapper[4869]: E0106 14:02:18.681084 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed.
No retries permitted until 2026-01-06 14:02:19.181063682 +0000 UTC m=+157.720751366 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.718874 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.718919 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.750490 4869 patch_prober.go:28] interesting pod/console-f9d7485db-b9gld container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.750553 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-b9gld" podUID="959dc13f-609b-4272-abe4-e26a0f79ab8c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.798758 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:18 crc kubenswrapper[4869]: E0106 14:02:18.801200 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-06 14:02:19.30117795 +0000 UTC m=+157.840865614 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5jk5b" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.806512 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-szfbw"] Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.826895 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-szfbw"] Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.827007 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-szfbw" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.837105 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.885767 4869 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-06T14:02:18.331141799Z","Handler":null,"Name":""} Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.900258 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.900434 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a2b8334-967b-4600-954a-db3f0bd2cd80-catalog-content\") pod \"community-operators-szfbw\" (UID: \"1a2b8334-967b-4600-954a-db3f0bd2cd80\") " pod="openshift-marketplace/community-operators-szfbw" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.900485 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a2b8334-967b-4600-954a-db3f0bd2cd80-utilities\") pod \"community-operators-szfbw\" (UID: \"1a2b8334-967b-4600-954a-db3f0bd2cd80\") " pod="openshift-marketplace/community-operators-szfbw" Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.900529 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64ll4\" (UniqueName: \"kubernetes.io/projected/1a2b8334-967b-4600-954a-db3f0bd2cd80-kube-api-access-64ll4\") pod \"community-operators-szfbw\" (UID: \"1a2b8334-967b-4600-954a-db3f0bd2cd80\") " pod="openshift-marketplace/community-operators-szfbw" Jan 06 14:02:18 crc kubenswrapper[4869]: E0106 14:02:18.900717 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-06 14:02:19.400699897 +0000 UTC m=+157.940387561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.923487 4869 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.923547 4869 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.930412 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.983597 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j8wrz"] Jan 06 14:02:18 crc kubenswrapper[4869]: I0106 14:02:18.984844 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8wrz" Jan 06 14:02:18 crc kubenswrapper[4869]: W0106 14:02:18.986594 4869 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: secrets "certified-operators-dockercfg-4rs5g" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 06 14:02:18 crc kubenswrapper[4869]: E0106 14:02:18.986644 4869 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"certified-operators-dockercfg-4rs5g\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.009633 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a2b8334-967b-4600-954a-db3f0bd2cd80-catalog-content\") pod \"community-operators-szfbw\" (UID: \"1a2b8334-967b-4600-954a-db3f0bd2cd80\") " pod="openshift-marketplace/community-operators-szfbw" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.009727 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a2b8334-967b-4600-954a-db3f0bd2cd80-utilities\") pod \"community-operators-szfbw\" (UID: \"1a2b8334-967b-4600-954a-db3f0bd2cd80\") " pod="openshift-marketplace/community-operators-szfbw" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.009784 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: 
\"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.009814 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64ll4\" (UniqueName: \"kubernetes.io/projected/1a2b8334-967b-4600-954a-db3f0bd2cd80-kube-api-access-64ll4\") pod \"community-operators-szfbw\" (UID: \"1a2b8334-967b-4600-954a-db3f0bd2cd80\") " pod="openshift-marketplace/community-operators-szfbw" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.010748 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a2b8334-967b-4600-954a-db3f0bd2cd80-utilities\") pod \"community-operators-szfbw\" (UID: \"1a2b8334-967b-4600-954a-db3f0bd2cd80\") " pod="openshift-marketplace/community-operators-szfbw" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.011061 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a2b8334-967b-4600-954a-db3f0bd2cd80-catalog-content\") pod \"community-operators-szfbw\" (UID: \"1a2b8334-967b-4600-954a-db3f0bd2cd80\") " pod="openshift-marketplace/community-operators-szfbw" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.027303 4869 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.027393 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.051437 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j8wrz"] Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.065786 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.068325 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64ll4\" (UniqueName: \"kubernetes.io/projected/1a2b8334-967b-4600-954a-db3f0bd2cd80-kube-api-access-64ll4\") pod \"community-operators-szfbw\" (UID: \"1a2b8334-967b-4600-954a-db3f0bd2cd80\") " pod="openshift-marketplace/community-operators-szfbw" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.096051 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5jk5b\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.111423 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.111614 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f52f78b-eb13-45bc-bf05-d1c138781664-config-volume\") pod \"2f52f78b-eb13-45bc-bf05-d1c138781664\" (UID: \"2f52f78b-eb13-45bc-bf05-d1c138781664\") " Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.111642 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f52f78b-eb13-45bc-bf05-d1c138781664-secret-volume\") pod \"2f52f78b-eb13-45bc-bf05-d1c138781664\" (UID: \"2f52f78b-eb13-45bc-bf05-d1c138781664\") " Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.111711 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxq59\" (UniqueName: \"kubernetes.io/projected/2f52f78b-eb13-45bc-bf05-d1c138781664-kube-api-access-rxq59\") pod \"2f52f78b-eb13-45bc-bf05-d1c138781664\" (UID: \"2f52f78b-eb13-45bc-bf05-d1c138781664\") " Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.111952 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137-utilities\") pod \"certified-operators-j8wrz\" (UID: \"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137\") " pod="openshift-marketplace/certified-operators-j8wrz" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.111997 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137-catalog-content\") pod \"certified-operators-j8wrz\" (UID: \"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137\") " pod="openshift-marketplace/certified-operators-j8wrz" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.112061 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc2kp\" (UniqueName: \"kubernetes.io/projected/a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137-kube-api-access-dc2kp\") pod \"certified-operators-j8wrz\" 
(UID: \"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137\") " pod="openshift-marketplace/certified-operators-j8wrz" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.112541 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f52f78b-eb13-45bc-bf05-d1c138781664-config-volume" (OuterVolumeSpecName: "config-volume") pod "2f52f78b-eb13-45bc-bf05-d1c138781664" (UID: "2f52f78b-eb13-45bc-bf05-d1c138781664"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.117478 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f52f78b-eb13-45bc-bf05-d1c138781664-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2f52f78b-eb13-45bc-bf05-d1c138781664" (UID: "2f52f78b-eb13-45bc-bf05-d1c138781664"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.118559 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f52f78b-eb13-45bc-bf05-d1c138781664-kube-api-access-rxq59" (OuterVolumeSpecName: "kube-api-access-rxq59") pod "2f52f78b-eb13-45bc-bf05-d1c138781664" (UID: "2f52f78b-eb13-45bc-bf05-d1c138781664"). InnerVolumeSpecName "kube-api-access-rxq59". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.128727 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.147620 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.185845 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z66qd"] Jan 06 14:02:19 crc kubenswrapper[4869]: E0106 14:02:19.186398 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f52f78b-eb13-45bc-bf05-d1c138781664" containerName="collect-profiles" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.186477 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f52f78b-eb13-45bc-bf05-d1c138781664" containerName="collect-profiles" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.186706 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f52f78b-eb13-45bc-bf05-d1c138781664" containerName="collect-profiles" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.187794 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z66qd" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.213249 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-szfbw" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.213415 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137-utilities\") pod \"certified-operators-j8wrz\" (UID: \"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137\") " pod="openshift-marketplace/certified-operators-j8wrz" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.213478 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137-catalog-content\") pod \"certified-operators-j8wrz\" (UID: \"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137\") " pod="openshift-marketplace/certified-operators-j8wrz" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.213539 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc2kp\" (UniqueName: \"kubernetes.io/projected/a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137-kube-api-access-dc2kp\") pod \"certified-operators-j8wrz\" (UID: \"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137\") " pod="openshift-marketplace/certified-operators-j8wrz" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.213633 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f52f78b-eb13-45bc-bf05-d1c138781664-config-volume\") on node \"crc\" DevicePath \"\"" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.213652 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f52f78b-eb13-45bc-bf05-d1c138781664-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.213691 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxq59\" (UniqueName: \"kubernetes.io/projected/2f52f78b-eb13-45bc-bf05-d1c138781664-kube-api-access-rxq59\") on node \"crc\" DevicePath \"\"" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.215072 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137-utilities\") pod \"certified-operators-j8wrz\" (UID: \"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137\") " pod="openshift-marketplace/certified-operators-j8wrz" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.215187 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137-catalog-content\") pod \"certified-operators-j8wrz\" (UID: \"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137\") " pod="openshift-marketplace/certified-operators-j8wrz" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.215347 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-4sgbs" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.215448 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z66qd"] Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.221888 4869 patch_prober.go:28] interesting pod/router-default-5444994796-4sgbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 06 14:02:19 crc 
kubenswrapper[4869]: [-]has-synced failed: reason withheld
Jan 06 14:02:19 crc kubenswrapper[4869]: [+]process-running ok
Jan 06 14:02:19 crc kubenswrapper[4869]: healthz check failed
Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.222533 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sgbs" podUID="538d7a4a-0270-4948-a67f-69f1d297f371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.243378 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc2kp\" (UniqueName: \"kubernetes.io/projected/a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137-kube-api-access-dc2kp\") pod \"certified-operators-j8wrz\" (UID: \"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137\") " pod="openshift-marketplace/certified-operators-j8wrz" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.314593 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb04313-66de-451d-bf22-a91c11cf497a-utilities\") pod \"community-operators-z66qd\" (UID: \"8cb04313-66de-451d-bf22-a91c11cf497a\") " pod="openshift-marketplace/community-operators-z66qd" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.314680 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxzhb\" (UniqueName: \"kubernetes.io/projected/8cb04313-66de-451d-bf22-a91c11cf497a-kube-api-access-wxzhb\") pod \"community-operators-z66qd\" (UID: \"8cb04313-66de-451d-bf22-a91c11cf497a\") " pod="openshift-marketplace/community-operators-z66qd" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.314760 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb04313-66de-451d-bf22-a91c11cf497a-catalog-content\") pod \"community-operators-z66qd\" (UID: \"8cb04313-66de-451d-bf22-a91c11cf497a\") " pod="openshift-marketplace/community-operators-z66qd" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.384638 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2l76t"] Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.389856 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2l76t" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.417184 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb04313-66de-451d-bf22-a91c11cf497a-catalog-content\") pod \"community-operators-z66qd\" (UID: \"8cb04313-66de-451d-bf22-a91c11cf497a\") " pod="openshift-marketplace/community-operators-z66qd" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.417306 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb04313-66de-451d-bf22-a91c11cf497a-utilities\") pod \"community-operators-z66qd\" (UID: \"8cb04313-66de-451d-bf22-a91c11cf497a\") " pod="openshift-marketplace/community-operators-z66qd" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.417360 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxzhb\" (UniqueName: \"kubernetes.io/projected/8cb04313-66de-451d-bf22-a91c11cf497a-kube-api-access-wxzhb\") pod \"community-operators-z66qd\" (UID: \"8cb04313-66de-451d-bf22-a91c11cf497a\") " pod="openshift-marketplace/community-operators-z66qd" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.418262 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb04313-66de-451d-bf22-a91c11cf497a-catalog-content\") pod \"community-operators-z66qd\" (UID: \"8cb04313-66de-451d-bf22-a91c11cf497a\") " pod="openshift-marketplace/community-operators-z66qd" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.418930 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb04313-66de-451d-bf22-a91c11cf497a-utilities\") pod \"community-operators-z66qd\" (UID: \"8cb04313-66de-451d-bf22-a91c11cf497a\") " pod="openshift-marketplace/community-operators-z66qd" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.420898 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2l76t"] Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.447042 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92" event={"ID":"2f52f78b-eb13-45bc-bf05-d1c138781664","Type":"ContainerDied","Data":"cfaa1d036cf464fd5f814f32a29027f26201df9e63ec93820e010e86293d79b9"} Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.447089 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfaa1d036cf464fd5f814f32a29027f26201df9e63ec93820e010e86293d79b9" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.447156 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.448062 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxzhb\" (UniqueName: \"kubernetes.io/projected/8cb04313-66de-451d-bf22-a91c11cf497a-kube-api-access-wxzhb\") pod \"community-operators-z66qd\" (UID: \"8cb04313-66de-451d-bf22-a91c11cf497a\") " pod="openshift-marketplace/community-operators-z66qd" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.467806 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" event={"ID":"93c78ab4-fc39-46e0-9135-146854d02c0f","Type":"ContainerStarted","Data":"cf8adaa6e958d1ddcdd3eac46ee424ef6bf521358e901e85bfa8ece1f0e4e2f5"} Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.467859 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" event={"ID":"93c78ab4-fc39-46e0-9135-146854d02c0f","Type":"ContainerStarted","Data":"5ba2b65b95dabe74f07dc5eb809bd45a4507f81c612e1b13d58b48ebd447c6b3"} Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.477797 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"73dcccf9-165b-4be1-b27d-8d97f1db34ad","Type":"ContainerStarted","Data":"ff9d6ae2f5d858eae73df85a03510ef5d3a9f64112eb1e9e29ce2bb4dfa3d28f"} Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.484538 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wzhmf" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.516170 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z66qd" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.518203 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3073b84-85aa-4f76-9ade-5e52abfc7cf7-utilities\") pod \"certified-operators-2l76t\" (UID: \"a3073b84-85aa-4f76-9ade-5e52abfc7cf7\") " pod="openshift-marketplace/certified-operators-2l76t" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.518246 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3073b84-85aa-4f76-9ade-5e52abfc7cf7-catalog-content\") pod \"certified-operators-2l76t\" (UID: \"a3073b84-85aa-4f76-9ade-5e52abfc7cf7\") " pod="openshift-marketplace/certified-operators-2l76t" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.518427 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r7z9\" (UniqueName: \"kubernetes.io/projected/a3073b84-85aa-4f76-9ade-5e52abfc7cf7-kube-api-access-2r7z9\") pod \"certified-operators-2l76t\" (UID: \"a3073b84-85aa-4f76-9ade-5e52abfc7cf7\") " pod="openshift-marketplace/certified-operators-2l76t" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.539228 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5jk5b"] Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.624236 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.626135 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3073b84-85aa-4f76-9ade-5e52abfc7cf7-utilities\") pod \"certified-operators-2l76t\" (UID: \"a3073b84-85aa-4f76-9ade-5e52abfc7cf7\") " pod="openshift-marketplace/certified-operators-2l76t" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.626161 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3073b84-85aa-4f76-9ade-5e52abfc7cf7-catalog-content\") pod \"certified-operators-2l76t\" (UID: \"a3073b84-85aa-4f76-9ade-5e52abfc7cf7\") " pod="openshift-marketplace/certified-operators-2l76t" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.626216 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r7z9\" (UniqueName: \"kubernetes.io/projected/a3073b84-85aa-4f76-9ade-5e52abfc7cf7-kube-api-access-2r7z9\") pod \"certified-operators-2l76t\" (UID: \"a3073b84-85aa-4f76-9ade-5e52abfc7cf7\") " pod="openshift-marketplace/certified-operators-2l76t" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.626950 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3073b84-85aa-4f76-9ade-5e52abfc7cf7-utilities\") pod \"certified-operators-2l76t\" (UID: \"a3073b84-85aa-4f76-9ade-5e52abfc7cf7\") " pod="openshift-marketplace/certified-operators-2l76t" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.627208 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3073b84-85aa-4f76-9ade-5e52abfc7cf7-catalog-content\") pod 
\"certified-operators-2l76t\" (UID: \"a3073b84-85aa-4f76-9ade-5e52abfc7cf7\") " pod="openshift-marketplace/certified-operators-2l76t" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.659719 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r7z9\" (UniqueName: \"kubernetes.io/projected/a3073b84-85aa-4f76-9ade-5e52abfc7cf7-kube-api-access-2r7z9\") pod \"certified-operators-2l76t\" (UID: \"a3073b84-85aa-4f76-9ade-5e52abfc7cf7\") " pod="openshift-marketplace/certified-operators-2l76t" Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.688340 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-szfbw"] Jan 06 14:02:19 crc kubenswrapper[4869]: I0106 14:02:19.755357 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.120416 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.124240 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8wrz" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.130854 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2l76t" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.148241 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z66qd"] Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.224076 4869 patch_prober.go:28] interesting pod/router-default-5444994796-4sgbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 06 14:02:20 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 06 14:02:20 crc kubenswrapper[4869]: [+]process-running ok Jan 06 14:02:20 crc kubenswrapper[4869]: healthz check failed Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.224142 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sgbs" podUID="538d7a4a-0270-4948-a67f-69f1d297f371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.484852 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szfbw" event={"ID":"1a2b8334-967b-4600-954a-db3f0bd2cd80","Type":"ContainerStarted","Data":"111920e8c184034a4153b04b225cad1c32cd3b112b4ffdb909291f6447f76f62"} Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.486234 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" event={"ID":"15c48694-481d-4ac5-80cc-e153ca5fb1d1","Type":"ContainerStarted","Data":"235531a2fba0d2132229e03d0b19943743e3186477f936fc405ffdbd0441ca44"} Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.487281 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z66qd" event={"ID":"8cb04313-66de-451d-bf22-a91c11cf497a","Type":"ContainerStarted","Data":"461ac75ee940c77c17a43e441cd1e836cbd79297e6086fcf9071aa2d8be1450c"} Jan 06 14:02:20 crc 
kubenswrapper[4869]: I0106 14:02:20.517818 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-pnb6r" podStartSLOduration=14.517795725 podStartE2EDuration="14.517795725s" podCreationTimestamp="2026-01-06 14:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:20.510996683 +0000 UTC m=+159.050684347" watchObservedRunningTime="2026-01-06 14:02:20.517795725 +0000 UTC m=+159.057483389" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.522360 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2l76t"] Jan 06 14:02:20 crc kubenswrapper[4869]: W0106 14:02:20.528347 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3073b84_85aa_4f76_9ade_5e52abfc7cf7.slice/crio-76803b1d55e22dbcb4b217c5a76ca6878852df375cca5023aee66959bf08c0ea WatchSource:0}: Error finding container 76803b1d55e22dbcb4b217c5a76ca6878852df375cca5023aee66959bf08c0ea: Status 404 returned error can't find the container with id 76803b1d55e22dbcb4b217c5a76ca6878852df375cca5023aee66959bf08c0ea Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.576519 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z5xn5"] Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.577931 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5xn5" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.585124 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.590339 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5xn5"] Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.623999 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j8wrz"] Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.667578 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nhxl\" (UniqueName: \"kubernetes.io/projected/c590ed4f-a46e-4826-beac-2d353aab75e1-kube-api-access-7nhxl\") pod \"redhat-marketplace-z5xn5\" (UID: \"c590ed4f-a46e-4826-beac-2d353aab75e1\") " pod="openshift-marketplace/redhat-marketplace-z5xn5" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.667721 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c590ed4f-a46e-4826-beac-2d353aab75e1-catalog-content\") pod \"redhat-marketplace-z5xn5\" (UID: \"c590ed4f-a46e-4826-beac-2d353aab75e1\") " pod="openshift-marketplace/redhat-marketplace-z5xn5" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.667768 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c590ed4f-a46e-4826-beac-2d353aab75e1-utilities\") pod \"redhat-marketplace-z5xn5\" (UID: \"c590ed4f-a46e-4826-beac-2d353aab75e1\") " pod="openshift-marketplace/redhat-marketplace-z5xn5" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.769601 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7nhxl\" (UniqueName: \"kubernetes.io/projected/c590ed4f-a46e-4826-beac-2d353aab75e1-kube-api-access-7nhxl\") pod \"redhat-marketplace-z5xn5\" (UID: \"c590ed4f-a46e-4826-beac-2d353aab75e1\") " pod="openshift-marketplace/redhat-marketplace-z5xn5" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.769689 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c590ed4f-a46e-4826-beac-2d353aab75e1-catalog-content\") pod \"redhat-marketplace-z5xn5\" (UID: \"c590ed4f-a46e-4826-beac-2d353aab75e1\") " pod="openshift-marketplace/redhat-marketplace-z5xn5" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.769719 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c590ed4f-a46e-4826-beac-2d353aab75e1-utilities\") pod \"redhat-marketplace-z5xn5\" (UID: \"c590ed4f-a46e-4826-beac-2d353aab75e1\") " pod="openshift-marketplace/redhat-marketplace-z5xn5" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.770627 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c590ed4f-a46e-4826-beac-2d353aab75e1-utilities\") pod \"redhat-marketplace-z5xn5\" (UID: \"c590ed4f-a46e-4826-beac-2d353aab75e1\") " pod="openshift-marketplace/redhat-marketplace-z5xn5" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.770914 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c590ed4f-a46e-4826-beac-2d353aab75e1-catalog-content\") pod \"redhat-marketplace-z5xn5\" (UID: \"c590ed4f-a46e-4826-beac-2d353aab75e1\") " pod="openshift-marketplace/redhat-marketplace-z5xn5" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.777855 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s2lrj"] Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.779179 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s2lrj" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.793478 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2lrj"] Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.803816 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nhxl\" (UniqueName: \"kubernetes.io/projected/c590ed4f-a46e-4826-beac-2d353aab75e1-kube-api-access-7nhxl\") pod \"redhat-marketplace-z5xn5\" (UID: \"c590ed4f-a46e-4826-beac-2d353aab75e1\") " pod="openshift-marketplace/redhat-marketplace-z5xn5" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.871822 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtncz\" (UniqueName: \"kubernetes.io/projected/bdffba93-f65c-45d9-98fe-9ea99cb13f14-kube-api-access-wtncz\") pod \"redhat-marketplace-s2lrj\" (UID: \"bdffba93-f65c-45d9-98fe-9ea99cb13f14\") " pod="openshift-marketplace/redhat-marketplace-s2lrj" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.872050 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdffba93-f65c-45d9-98fe-9ea99cb13f14-utilities\") pod \"redhat-marketplace-s2lrj\" (UID: \"bdffba93-f65c-45d9-98fe-9ea99cb13f14\") " pod="openshift-marketplace/redhat-marketplace-s2lrj" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.872094 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdffba93-f65c-45d9-98fe-9ea99cb13f14-catalog-content\") pod \"redhat-marketplace-s2lrj\" (UID: \"bdffba93-f65c-45d9-98fe-9ea99cb13f14\") " pod="openshift-marketplace/redhat-marketplace-s2lrj" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.921299 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5xn5" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.973278 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtncz\" (UniqueName: \"kubernetes.io/projected/bdffba93-f65c-45d9-98fe-9ea99cb13f14-kube-api-access-wtncz\") pod \"redhat-marketplace-s2lrj\" (UID: \"bdffba93-f65c-45d9-98fe-9ea99cb13f14\") " pod="openshift-marketplace/redhat-marketplace-s2lrj" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.973385 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdffba93-f65c-45d9-98fe-9ea99cb13f14-utilities\") pod \"redhat-marketplace-s2lrj\" (UID: \"bdffba93-f65c-45d9-98fe-9ea99cb13f14\") " pod="openshift-marketplace/redhat-marketplace-s2lrj" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.973430 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdffba93-f65c-45d9-98fe-9ea99cb13f14-catalog-content\") pod \"redhat-marketplace-s2lrj\" (UID: \"bdffba93-f65c-45d9-98fe-9ea99cb13f14\") " pod="openshift-marketplace/redhat-marketplace-s2lrj" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.974126 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdffba93-f65c-45d9-98fe-9ea99cb13f14-catalog-content\") pod \"redhat-marketplace-s2lrj\" (UID: \"bdffba93-f65c-45d9-98fe-9ea99cb13f14\") " pod="openshift-marketplace/redhat-marketplace-s2lrj" Jan 06 14:02:20 crc kubenswrapper[4869]: I0106 14:02:20.974431 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdffba93-f65c-45d9-98fe-9ea99cb13f14-utilities\") pod \"redhat-marketplace-s2lrj\" (UID: \"bdffba93-f65c-45d9-98fe-9ea99cb13f14\") " pod="openshift-marketplace/redhat-marketplace-s2lrj" Jan 06 14:02:21 crc kubenswrapper[4869]: I0106 14:02:21.034107 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtncz\" (UniqueName: \"kubernetes.io/projected/bdffba93-f65c-45d9-98fe-9ea99cb13f14-kube-api-access-wtncz\") pod \"redhat-marketplace-s2lrj\" (UID: \"bdffba93-f65c-45d9-98fe-9ea99cb13f14\") " pod="openshift-marketplace/redhat-marketplace-s2lrj" Jan 06 14:02:21 crc kubenswrapper[4869]: I0106 14:02:21.111128 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s2lrj" Jan 06 14:02:21 crc kubenswrapper[4869]: I0106 14:02:21.227686 4869 patch_prober.go:28] interesting pod/router-default-5444994796-4sgbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 06 14:02:21 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 06 14:02:21 crc kubenswrapper[4869]: [+]process-running ok Jan 06 14:02:21 crc kubenswrapper[4869]: healthz check failed Jan 06 14:02:21 crc kubenswrapper[4869]: I0106 14:02:21.227757 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sgbs" podUID="538d7a4a-0270-4948-a67f-69f1d297f371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 06 14:02:21 crc kubenswrapper[4869]: I0106 14:02:21.327161 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5xn5"] Jan 06 14:02:21 crc kubenswrapper[4869]: I0106 14:02:21.442769 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2lrj"] Jan 06 14:02:21 crc kubenswrapper[4869]: I0106 14:02:21.495115 4869 generic.go:334] "Generic (PLEG): container finished" podID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" containerID="b1a6c6c1e735f6a46402dc6bd49a33ed53029ba163df79147423ac178ae204ee" exitCode=0 Jan 06 14:02:21 crc kubenswrapper[4869]: I0106 14:02:21.495196 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2l76t" event={"ID":"a3073b84-85aa-4f76-9ade-5e52abfc7cf7","Type":"ContainerDied","Data":"b1a6c6c1e735f6a46402dc6bd49a33ed53029ba163df79147423ac178ae204ee"} Jan 06 14:02:21 crc kubenswrapper[4869]: I0106 14:02:21.495698 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2l76t" event={"ID":"a3073b84-85aa-4f76-9ade-5e52abfc7cf7","Type":"ContainerStarted","Data":"76803b1d55e22dbcb4b217c5a76ca6878852df375cca5023aee66959bf08c0ea"} Jan 06 14:02:21 crc kubenswrapper[4869]: I0106 14:02:21.497523 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"73dcccf9-165b-4be1-b27d-8d97f1db34ad","Type":"ContainerStarted","Data":"9e68b391b7877d7995868f3c485721c09576f2dbf008119d22ea7396fada940d"} Jan 06 14:02:21 crc kubenswrapper[4869]: I0106 14:02:21.499235 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8wrz" event={"ID":"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137","Type":"ContainerStarted","Data":"586af9e18e36f6f57c8b2cf43e3fe619cf25ec160ed78887d7253d6de9a50ed1"} Jan 06 14:02:21 crc kubenswrapper[4869]: I0106 14:02:21.499269 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8wrz" event={"ID":"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137","Type":"ContainerStarted","Data":"988839591f62b86907af44d40e625b91c1c6bcba2a7b4763c149fdff6ae7990e"} Jan 06 14:02:21 crc kubenswrapper[4869]: I0106 14:02:21.500997 4869 generic.go:334] "Generic (PLEG): container finished" podID="1a2b8334-967b-4600-954a-db3f0bd2cd80" containerID="9aeaef6c5ba1b097b50a50a81c76a92344a46254fe5b00e2277c22d895da2cda" exitCode=0 Jan 06 14:02:21 crc kubenswrapper[4869]: I0106 14:02:21.501090 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-szfbw" event={"ID":"1a2b8334-967b-4600-954a-db3f0bd2cd80","Type":"ContainerDied","Data":"9aeaef6c5ba1b097b50a50a81c76a92344a46254fe5b00e2277c22d895da2cda"} Jan 06 14:02:21 crc kubenswrapper[4869]: I0106 14:02:21.502862 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" event={"ID":"15c48694-481d-4ac5-80cc-e153ca5fb1d1","Type":"ContainerStarted","Data":"7bfa91b7521c7a7b7874422e4113029d9f8af933ddaa55db42a4950ab4bce8e6"} Jan 06 14:02:21 crc kubenswrapper[4869]: I0106 14:02:21.503064 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:21 crc kubenswrapper[4869]: I0106 14:02:21.504457 4869 generic.go:334] "Generic (PLEG): container finished" podID="8cb04313-66de-451d-bf22-a91c11cf497a" containerID="e306b8559baa780154cf52ad3fc3dd14d6698ed2ba26d9864ad47afabf896455" exitCode=0 Jan 06 14:02:21 crc kubenswrapper[4869]: I0106 14:02:21.504495 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z66qd" event={"ID":"8cb04313-66de-451d-bf22-a91c11cf497a","Type":"ContainerDied","Data":"e306b8559baa780154cf52ad3fc3dd14d6698ed2ba26d9864ad47afabf896455"} Jan 06 14:02:21 crc kubenswrapper[4869]: I0106 14:02:21.517655 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.517632462 podStartE2EDuration="3.517632462s" podCreationTimestamp="2026-01-06 14:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:21.517269312 +0000 UTC m=+160.056956976" watchObservedRunningTime="2026-01-06 14:02:21.517632462 +0000 UTC m=+160.057320126" Jan 06 14:02:21 crc kubenswrapper[4869]: I0106 14:02:21.539242 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" podStartSLOduration=141.539205078 podStartE2EDuration="2m21.539205078s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:21.539040734 +0000 UTC m=+160.078728418" watchObservedRunningTime="2026-01-06 14:02:21.539205078 +0000 UTC m=+160.078892762" Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.220040 4869 patch_prober.go:28] interesting pod/router-default-5444994796-4sgbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 06 14:02:22 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 06 14:02:22 crc kubenswrapper[4869]: [+]process-running ok Jan 06 14:02:22 crc kubenswrapper[4869]: healthz check failed Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.220436 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sgbs" podUID="538d7a4a-0270-4948-a67f-69f1d297f371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.512474 4869 generic.go:334] "Generic (PLEG): container finished" podID="bdffba93-f65c-45d9-98fe-9ea99cb13f14" 
containerID="13223ec013b581d60b87457420b2ed44f3b4c945ce604f3dc88d3c8df9fbbed6" exitCode=0 Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.512565 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2lrj" event={"ID":"bdffba93-f65c-45d9-98fe-9ea99cb13f14","Type":"ContainerDied","Data":"13223ec013b581d60b87457420b2ed44f3b4c945ce604f3dc88d3c8df9fbbed6"} Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.512613 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2lrj" event={"ID":"bdffba93-f65c-45d9-98fe-9ea99cb13f14","Type":"ContainerStarted","Data":"4b9293abeab2f2922cb74ab4466b621b0295ff17ee5734661c66e0578266ae55"} Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.519349 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.529467 4869 generic.go:334] "Generic (PLEG): container finished" podID="c590ed4f-a46e-4826-beac-2d353aab75e1" containerID="3bb550f140e8897e7e0fcd7bd47939ec0805147f593b8d290fe22f64a46612c8" exitCode=0 Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.529619 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5xn5" event={"ID":"c590ed4f-a46e-4826-beac-2d353aab75e1","Type":"ContainerDied","Data":"3bb550f140e8897e7e0fcd7bd47939ec0805147f593b8d290fe22f64a46612c8"} Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.533703 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5xn5" event={"ID":"c590ed4f-a46e-4826-beac-2d353aab75e1","Type":"ContainerStarted","Data":"c9262523dc8d6508b40ba9b8c2d5ea678a05fe21cc800ed26e1d18efd9e4a67a"} Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.570286 4869 generic.go:334] "Generic (PLEG): container finished" podID="73dcccf9-165b-4be1-b27d-8d97f1db34ad" containerID="9e68b391b7877d7995868f3c485721c09576f2dbf008119d22ea7396fada940d" exitCode=0 Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.570465 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"73dcccf9-165b-4be1-b27d-8d97f1db34ad","Type":"ContainerDied","Data":"9e68b391b7877d7995868f3c485721c09576f2dbf008119d22ea7396fada940d"} Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.598823 4869 generic.go:334] "Generic (PLEG): container finished" podID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" containerID="586af9e18e36f6f57c8b2cf43e3fe619cf25ec160ed78887d7253d6de9a50ed1" exitCode=0 Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.600619 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8wrz" event={"ID":"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137","Type":"ContainerDied","Data":"586af9e18e36f6f57c8b2cf43e3fe619cf25ec160ed78887d7253d6de9a50ed1"} Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.601054 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cbszs"] Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.603416 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cbszs" Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.610706 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.621211 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cbszs"] Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.647526 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs\") pod \"network-metrics-daemon-mmdq4\" (UID: \"b86d961d-74c0-40cb-912d-ae0db79d97f2\") " pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.677214 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b86d961d-74c0-40cb-912d-ae0db79d97f2-metrics-certs\") pod \"network-metrics-daemon-mmdq4\" (UID: \"b86d961d-74c0-40cb-912d-ae0db79d97f2\") " pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.728184 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mmdq4" Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.749110 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0f4d25c-95bf-4bcd-b4a7-eb8344871cce-utilities\") pod \"redhat-operators-cbszs\" (UID: \"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce\") " pod="openshift-marketplace/redhat-operators-cbszs" Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.749185 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0f4d25c-95bf-4bcd-b4a7-eb8344871cce-catalog-content\") pod \"redhat-operators-cbszs\" (UID: \"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce\") " pod="openshift-marketplace/redhat-operators-cbszs" Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.749256 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc2v9\" (UniqueName: \"kubernetes.io/projected/c0f4d25c-95bf-4bcd-b4a7-eb8344871cce-kube-api-access-vc2v9\") pod \"redhat-operators-cbszs\" (UID: \"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce\") " pod="openshift-marketplace/redhat-operators-cbszs" Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.850553 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0f4d25c-95bf-4bcd-b4a7-eb8344871cce-utilities\") pod \"redhat-operators-cbszs\" (UID: \"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce\") " pod="openshift-marketplace/redhat-operators-cbszs" Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.850657 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0f4d25c-95bf-4bcd-b4a7-eb8344871cce-catalog-content\") pod \"redhat-operators-cbszs\" (UID: \"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce\") " pod="openshift-marketplace/redhat-operators-cbszs" Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.850806 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vc2v9\" (UniqueName: \"kubernetes.io/projected/c0f4d25c-95bf-4bcd-b4a7-eb8344871cce-kube-api-access-vc2v9\") pod \"redhat-operators-cbszs\" (UID: \"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce\") " pod="openshift-marketplace/redhat-operators-cbszs" Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.851941 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0f4d25c-95bf-4bcd-b4a7-eb8344871cce-utilities\") pod \"redhat-operators-cbszs\" (UID: \"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce\") " pod="openshift-marketplace/redhat-operators-cbszs" Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.852215 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0f4d25c-95bf-4bcd-b4a7-eb8344871cce-catalog-content\") pod \"redhat-operators-cbszs\" (UID: \"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce\") " pod="openshift-marketplace/redhat-operators-cbszs" Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.873629 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc2v9\" (UniqueName: \"kubernetes.io/projected/c0f4d25c-95bf-4bcd-b4a7-eb8344871cce-kube-api-access-vc2v9\") pod \"redhat-operators-cbszs\" (UID: \"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce\") " pod="openshift-marketplace/redhat-operators-cbszs" Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.930434 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbszs" Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.953399 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.955401 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.956060 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.960362 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 06 14:02:22 crc kubenswrapper[4869]: I0106 14:02:22.960834 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.004275 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ct92x"] Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.005912 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ct92x" Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.040804 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ct92x"] Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.055051 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f812ab2-dcdf-48dd-986d-6bb5f6db9234-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6f812ab2-dcdf-48dd-986d-6bb5f6db9234\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.055383 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f812ab2-dcdf-48dd-986d-6bb5f6db9234-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6f812ab2-dcdf-48dd-986d-6bb5f6db9234\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.140998 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mmdq4"] Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.156706 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f812ab2-dcdf-48dd-986d-6bb5f6db9234-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6f812ab2-dcdf-48dd-986d-6bb5f6db9234\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.156774 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dff049ab-f2f2-47b0-ad0d-28a5977bd953-utilities\") pod \"redhat-operators-ct92x\" (UID: \"dff049ab-f2f2-47b0-ad0d-28a5977bd953\") " pod="openshift-marketplace/redhat-operators-ct92x" Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.156820 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f812ab2-dcdf-48dd-986d-6bb5f6db9234-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6f812ab2-dcdf-48dd-986d-6bb5f6db9234\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.156885 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dff049ab-f2f2-47b0-ad0d-28a5977bd953-catalog-content\") pod \"redhat-operators-ct92x\" (UID: \"dff049ab-f2f2-47b0-ad0d-28a5977bd953\") " pod="openshift-marketplace/redhat-operators-ct92x" Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.156916 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p59n4\" (UniqueName: \"kubernetes.io/projected/dff049ab-f2f2-47b0-ad0d-28a5977bd953-kube-api-access-p59n4\") pod \"redhat-operators-ct92x\" (UID: \"dff049ab-f2f2-47b0-ad0d-28a5977bd953\") " pod="openshift-marketplace/redhat-operators-ct92x" Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.157063 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f812ab2-dcdf-48dd-986d-6bb5f6db9234-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: 
\"6f812ab2-dcdf-48dd-986d-6bb5f6db9234\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.179265 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f812ab2-dcdf-48dd-986d-6bb5f6db9234-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6f812ab2-dcdf-48dd-986d-6bb5f6db9234\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.234872 4869 patch_prober.go:28] interesting pod/router-default-5444994796-4sgbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 06 14:02:23 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 06 14:02:23 crc kubenswrapper[4869]: [+]process-running ok Jan 06 14:02:23 crc kubenswrapper[4869]: healthz check failed Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.234978 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sgbs" podUID="538d7a4a-0270-4948-a67f-69f1d297f371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.258730 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dff049ab-f2f2-47b0-ad0d-28a5977bd953-utilities\") pod \"redhat-operators-ct92x\" (UID: \"dff049ab-f2f2-47b0-ad0d-28a5977bd953\") " pod="openshift-marketplace/redhat-operators-ct92x" Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.258840 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dff049ab-f2f2-47b0-ad0d-28a5977bd953-catalog-content\") pod \"redhat-operators-ct92x\" (UID: \"dff049ab-f2f2-47b0-ad0d-28a5977bd953\") " pod="openshift-marketplace/redhat-operators-ct92x" Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.258863 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p59n4\" (UniqueName: \"kubernetes.io/projected/dff049ab-f2f2-47b0-ad0d-28a5977bd953-kube-api-access-p59n4\") pod \"redhat-operators-ct92x\" (UID: \"dff049ab-f2f2-47b0-ad0d-28a5977bd953\") " pod="openshift-marketplace/redhat-operators-ct92x" Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.259790 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dff049ab-f2f2-47b0-ad0d-28a5977bd953-catalog-content\") pod \"redhat-operators-ct92x\" (UID: \"dff049ab-f2f2-47b0-ad0d-28a5977bd953\") " pod="openshift-marketplace/redhat-operators-ct92x" Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.260198 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dff049ab-f2f2-47b0-ad0d-28a5977bd953-utilities\") pod \"redhat-operators-ct92x\" (UID: \"dff049ab-f2f2-47b0-ad0d-28a5977bd953\") " pod="openshift-marketplace/redhat-operators-ct92x" Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.277727 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p59n4\" (UniqueName: \"kubernetes.io/projected/dff049ab-f2f2-47b0-ad0d-28a5977bd953-kube-api-access-p59n4\") pod \"redhat-operators-ct92x\" (UID: 
\"dff049ab-f2f2-47b0-ad0d-28a5977bd953\") " pod="openshift-marketplace/redhat-operators-ct92x" Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.307968 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.337636 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cbszs"] Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.340049 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ct92x" Jan 06 14:02:23 crc kubenswrapper[4869]: W0106 14:02:23.371790 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0f4d25c_95bf_4bcd_b4a7_eb8344871cce.slice/crio-88ee309e80291399cebd109e8cc4075a8755b8a1e894c7c34194078865267821 WatchSource:0}: Error finding container 88ee309e80291399cebd109e8cc4075a8755b8a1e894c7c34194078865267821: Status 404 returned error can't find the container with id 88ee309e80291399cebd109e8cc4075a8755b8a1e894c7c34194078865267821 Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.624049 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbszs" event={"ID":"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce","Type":"ContainerStarted","Data":"88ee309e80291399cebd109e8cc4075a8755b8a1e894c7c34194078865267821"} Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.626509 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" event={"ID":"b86d961d-74c0-40cb-912d-ae0db79d97f2","Type":"ContainerStarted","Data":"9c2acce9c58570d79c32166c725bc3766f0a7df605c180e7fdb5667ef326784d"} Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.659587 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.674871 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-qr849" Jan 06 14:02:23 crc kubenswrapper[4869]: I0106 14:02:23.869772 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 06 14:02:24 crc kubenswrapper[4869]: I0106 14:02:24.075102 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ct92x"] Jan 06 14:02:24 crc kubenswrapper[4869]: W0106 14:02:24.122218 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddff049ab_f2f2_47b0_ad0d_28a5977bd953.slice/crio-c08dde5728de9b89c77451beb869225a452fa7d748ea22dd3b9f278185f86432 WatchSource:0}: Error finding container c08dde5728de9b89c77451beb869225a452fa7d748ea22dd3b9f278185f86432: Status 404 returned error can't find the container with id c08dde5728de9b89c77451beb869225a452fa7d748ea22dd3b9f278185f86432 Jan 06 14:02:24 crc kubenswrapper[4869]: I0106 14:02:24.221419 4869 patch_prober.go:28] interesting pod/router-default-5444994796-4sgbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 06 14:02:24 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 06 14:02:24 crc kubenswrapper[4869]: [+]process-running ok Jan 06 
14:02:24 crc kubenswrapper[4869]: healthz check failed Jan 06 14:02:24 crc kubenswrapper[4869]: I0106 14:02:24.221875 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sgbs" podUID="538d7a4a-0270-4948-a67f-69f1d297f371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 06 14:02:24 crc kubenswrapper[4869]: I0106 14:02:24.362043 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 06 14:02:24 crc kubenswrapper[4869]: I0106 14:02:24.504811 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73dcccf9-165b-4be1-b27d-8d97f1db34ad-kubelet-dir\") pod \"73dcccf9-165b-4be1-b27d-8d97f1db34ad\" (UID: \"73dcccf9-165b-4be1-b27d-8d97f1db34ad\") " Jan 06 14:02:24 crc kubenswrapper[4869]: I0106 14:02:24.504913 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73dcccf9-165b-4be1-b27d-8d97f1db34ad-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "73dcccf9-165b-4be1-b27d-8d97f1db34ad" (UID: "73dcccf9-165b-4be1-b27d-8d97f1db34ad"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:02:24 crc kubenswrapper[4869]: I0106 14:02:24.505191 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73dcccf9-165b-4be1-b27d-8d97f1db34ad-kube-api-access\") pod \"73dcccf9-165b-4be1-b27d-8d97f1db34ad\" (UID: \"73dcccf9-165b-4be1-b27d-8d97f1db34ad\") " Jan 06 14:02:24 crc kubenswrapper[4869]: I0106 14:02:24.506240 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73dcccf9-165b-4be1-b27d-8d97f1db34ad-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 06 14:02:24 crc kubenswrapper[4869]: I0106 14:02:24.524557 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73dcccf9-165b-4be1-b27d-8d97f1db34ad-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "73dcccf9-165b-4be1-b27d-8d97f1db34ad" (UID: "73dcccf9-165b-4be1-b27d-8d97f1db34ad"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:02:24 crc kubenswrapper[4869]: I0106 14:02:24.608860 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73dcccf9-165b-4be1-b27d-8d97f1db34ad-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 06 14:02:24 crc kubenswrapper[4869]: I0106 14:02:24.654977 4869 generic.go:334] "Generic (PLEG): container finished" podID="c0f4d25c-95bf-4bcd-b4a7-eb8344871cce" containerID="bf0feedb54575efbbf182f7b828ca238c3e9a80021ba35374a0aaa87d9ec1234" exitCode=0 Jan 06 14:02:24 crc kubenswrapper[4869]: I0106 14:02:24.656055 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbszs" event={"ID":"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce","Type":"ContainerDied","Data":"bf0feedb54575efbbf182f7b828ca238c3e9a80021ba35374a0aaa87d9ec1234"} Jan 06 14:02:24 crc kubenswrapper[4869]: I0106 14:02:24.680355 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" event={"ID":"b86d961d-74c0-40cb-912d-ae0db79d97f2","Type":"ContainerStarted","Data":"d6ae2df0832763741375e5ddd30f8d851e9167c2943ec02e39ecdca76fb194ae"} Jan 06 14:02:24 crc kubenswrapper[4869]: I0106 14:02:24.680417 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mmdq4" event={"ID":"b86d961d-74c0-40cb-912d-ae0db79d97f2","Type":"ContainerStarted","Data":"3de86809fc65fb447e4aa2b2f5f45d2f1471cf7f23e25103b831c5947961bc31"} Jan 06 14:02:24 crc kubenswrapper[4869]: I0106 14:02:24.685123 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ct92x" event={"ID":"dff049ab-f2f2-47b0-ad0d-28a5977bd953","Type":"ContainerStarted","Data":"c08dde5728de9b89c77451beb869225a452fa7d748ea22dd3b9f278185f86432"} Jan 06 14:02:24 crc kubenswrapper[4869]: I0106 14:02:24.712426 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-mmdq4" podStartSLOduration=144.712404955 podStartE2EDuration="2m24.712404955s" podCreationTimestamp="2026-01-06 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:24.708937913 +0000 UTC m=+163.248625597" watchObservedRunningTime="2026-01-06 14:02:24.712404955 +0000 UTC m=+163.252092639" Jan 06 14:02:24 crc kubenswrapper[4869]: I0106 14:02:24.716072 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6f812ab2-dcdf-48dd-986d-6bb5f6db9234","Type":"ContainerStarted","Data":"877bba47ea8994e2a45201b0f679dcdd8a2ac2c1637fabc4b7e48894fcb7f58c"} Jan 06 14:02:24 crc kubenswrapper[4869]: I0106 14:02:24.722157 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 06 14:02:24 crc kubenswrapper[4869]: I0106 14:02:24.722145 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"73dcccf9-165b-4be1-b27d-8d97f1db34ad","Type":"ContainerDied","Data":"ff9d6ae2f5d858eae73df85a03510ef5d3a9f64112eb1e9e29ce2bb4dfa3d28f"} Jan 06 14:02:24 crc kubenswrapper[4869]: I0106 14:02:24.722273 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff9d6ae2f5d858eae73df85a03510ef5d3a9f64112eb1e9e29ce2bb4dfa3d28f" Jan 06 14:02:25 crc kubenswrapper[4869]: I0106 14:02:25.219619 4869 patch_prober.go:28] interesting pod/router-default-5444994796-4sgbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 06 14:02:25 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 06 14:02:25 crc kubenswrapper[4869]: [+]process-running ok Jan 06 14:02:25 crc kubenswrapper[4869]: healthz check failed Jan 06 14:02:25 crc kubenswrapper[4869]: I0106 14:02:25.220167 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sgbs" podUID="538d7a4a-0270-4948-a67f-69f1d297f371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 06 14:02:25 crc kubenswrapper[4869]: I0106 14:02:25.749901 4869 generic.go:334] "Generic (PLEG): container finished" podID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" containerID="7d4574ccdba4f73f28aa0c065679020a6367695ec26a005034fdf05de295ecdc" exitCode=0 Jan 06 14:02:25 crc kubenswrapper[4869]: I0106 14:02:25.749998 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ct92x" event={"ID":"dff049ab-f2f2-47b0-ad0d-28a5977bd953","Type":"ContainerDied","Data":"7d4574ccdba4f73f28aa0c065679020a6367695ec26a005034fdf05de295ecdc"} Jan 06 14:02:25 crc kubenswrapper[4869]: I0106 14:02:25.759876 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6f812ab2-dcdf-48dd-986d-6bb5f6db9234","Type":"ContainerStarted","Data":"70ccff1fb01b39ac83b855dd053023da813ea7e7834da9c60b0eb5c3da3da572"} Jan 06 14:02:25 crc kubenswrapper[4869]: I0106 14:02:25.790842 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.79081798 podStartE2EDuration="3.79081798s" podCreationTimestamp="2026-01-06 14:02:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:02:25.789301919 +0000 UTC m=+164.328989583" watchObservedRunningTime="2026-01-06 14:02:25.79081798 +0000 UTC m=+164.330505644" Jan 06 14:02:26 crc kubenswrapper[4869]: I0106 14:02:26.223059 4869 patch_prober.go:28] interesting pod/router-default-5444994796-4sgbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 06 14:02:26 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 06 14:02:26 crc kubenswrapper[4869]: [+]process-running ok Jan 06 14:02:26 crc kubenswrapper[4869]: healthz check failed Jan 06 14:02:26 crc kubenswrapper[4869]: I0106 14:02:26.223138 4869 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sgbs" podUID="538d7a4a-0270-4948-a67f-69f1d297f371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 06 14:02:26 crc kubenswrapper[4869]: I0106 14:02:26.785756 4869 generic.go:334] "Generic (PLEG): container finished" podID="6f812ab2-dcdf-48dd-986d-6bb5f6db9234" containerID="70ccff1fb01b39ac83b855dd053023da813ea7e7834da9c60b0eb5c3da3da572" exitCode=0 Jan 06 14:02:26 crc kubenswrapper[4869]: I0106 14:02:26.785821 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6f812ab2-dcdf-48dd-986d-6bb5f6db9234","Type":"ContainerDied","Data":"70ccff1fb01b39ac83b855dd053023da813ea7e7834da9c60b0eb5c3da3da572"} Jan 06 14:02:27 crc kubenswrapper[4869]: I0106 14:02:27.218955 4869 patch_prober.go:28] interesting pod/router-default-5444994796-4sgbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 06 14:02:27 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 06 14:02:27 crc kubenswrapper[4869]: [+]process-running ok Jan 06 14:02:27 crc kubenswrapper[4869]: healthz check failed Jan 06 14:02:27 crc kubenswrapper[4869]: I0106 14:02:27.219051 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sgbs" podUID="538d7a4a-0270-4948-a67f-69f1d297f371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 06 14:02:27 crc kubenswrapper[4869]: I0106 14:02:27.746128 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-hdc42" Jan 06 14:02:28 crc kubenswrapper[4869]: I0106 14:02:28.142516 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 06 14:02:28 crc kubenswrapper[4869]: I0106 14:02:28.219389 4869 patch_prober.go:28] interesting pod/router-default-5444994796-4sgbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 06 14:02:28 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 06 14:02:28 crc kubenswrapper[4869]: [+]process-running ok Jan 06 14:02:28 crc kubenswrapper[4869]: healthz check failed Jan 06 14:02:28 crc kubenswrapper[4869]: I0106 14:02:28.219443 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sgbs" podUID="538d7a4a-0270-4948-a67f-69f1d297f371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 06 14:02:28 crc kubenswrapper[4869]: I0106 14:02:28.313629 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f812ab2-dcdf-48dd-986d-6bb5f6db9234-kube-api-access\") pod \"6f812ab2-dcdf-48dd-986d-6bb5f6db9234\" (UID: \"6f812ab2-dcdf-48dd-986d-6bb5f6db9234\") " Jan 06 14:02:28 crc kubenswrapper[4869]: I0106 14:02:28.313784 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f812ab2-dcdf-48dd-986d-6bb5f6db9234-kubelet-dir\") pod \"6f812ab2-dcdf-48dd-986d-6bb5f6db9234\" (UID: \"6f812ab2-dcdf-48dd-986d-6bb5f6db9234\") " Jan 06 14:02:28 crc kubenswrapper[4869]: I0106 14:02:28.314177 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f812ab2-dcdf-48dd-986d-6bb5f6db9234-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6f812ab2-dcdf-48dd-986d-6bb5f6db9234" (UID: "6f812ab2-dcdf-48dd-986d-6bb5f6db9234"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:02:28 crc kubenswrapper[4869]: I0106 14:02:28.329656 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f812ab2-dcdf-48dd-986d-6bb5f6db9234-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6f812ab2-dcdf-48dd-986d-6bb5f6db9234" (UID: "6f812ab2-dcdf-48dd-986d-6bb5f6db9234"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:02:28 crc kubenswrapper[4869]: I0106 14:02:28.380825 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-vx9gs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 06 14:02:28 crc kubenswrapper[4869]: I0106 14:02:28.380884 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-vx9gs" podUID="f1d294f9-a755-49bc-bc10-5b4e9739a914" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 06 14:02:28 crc kubenswrapper[4869]: I0106 14:02:28.381279 4869 patch_prober.go:28] interesting pod/downloads-7954f5f757-vx9gs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 06 14:02:28 crc kubenswrapper[4869]: I0106 14:02:28.381342 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-vx9gs" podUID="f1d294f9-a755-49bc-bc10-5b4e9739a914" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 06 14:02:28 crc kubenswrapper[4869]: I0106 14:02:28.421122 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f812ab2-dcdf-48dd-986d-6bb5f6db9234-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 06 14:02:28 crc kubenswrapper[4869]: I0106 14:02:28.421164 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6f812ab2-dcdf-48dd-986d-6bb5f6db9234-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 06 14:02:28 crc kubenswrapper[4869]: I0106 14:02:28.719587 4869 patch_prober.go:28] interesting pod/console-f9d7485db-b9gld container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 06 14:02:28 crc kubenswrapper[4869]: I0106 14:02:28.719658 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-b9gld" podUID="959dc13f-609b-4272-abe4-e26a0f79ab8c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 06 14:02:28 crc kubenswrapper[4869]: I0106 14:02:28.830924 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6f812ab2-dcdf-48dd-986d-6bb5f6db9234","Type":"ContainerDied","Data":"877bba47ea8994e2a45201b0f679dcdd8a2ac2c1637fabc4b7e48894fcb7f58c"} Jan 06 14:02:28 crc kubenswrapper[4869]: I0106 14:02:28.830974 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="877bba47ea8994e2a45201b0f679dcdd8a2ac2c1637fabc4b7e48894fcb7f58c" Jan 06 14:02:28 crc kubenswrapper[4869]: I0106 14:02:28.831076 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 06 14:02:29 crc kubenswrapper[4869]: I0106 14:02:29.218021 4869 patch_prober.go:28] interesting pod/router-default-5444994796-4sgbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 06 14:02:29 crc kubenswrapper[4869]: [-]has-synced failed: reason withheld Jan 06 14:02:29 crc kubenswrapper[4869]: [+]process-running ok Jan 06 14:02:29 crc kubenswrapper[4869]: healthz check failed Jan 06 14:02:29 crc kubenswrapper[4869]: I0106 14:02:29.218088 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4sgbs" podUID="538d7a4a-0270-4948-a67f-69f1d297f371" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 06 14:02:30 crc kubenswrapper[4869]: I0106 14:02:30.342922 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-4sgbs" Jan 06 14:02:30 crc kubenswrapper[4869]: I0106 14:02:30.353038 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-4sgbs" Jan 06 14:02:33 crc kubenswrapper[4869]: I0106 14:02:33.625042 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:02:33 crc kubenswrapper[4869]: I0106 14:02:33.625999 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:02:38 crc kubenswrapper[4869]: I0106 14:02:38.382302 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-vx9gs" Jan 06 14:02:38 crc kubenswrapper[4869]: I0106 14:02:38.724602 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:38 crc kubenswrapper[4869]: I0106 14:02:38.728355 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:02:39 crc kubenswrapper[4869]: I0106 14:02:39.135920 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:02:49 crc kubenswrapper[4869]: I0106 14:02:49.689814 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9kkzq" Jan 06 14:02:50 crc kubenswrapper[4869]: I0106 14:02:50.358341 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 06 14:02:58 crc kubenswrapper[4869]: I0106 14:02:58.129092 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 06 14:02:58 crc kubenswrapper[4869]: E0106 14:02:58.134968 4869 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6f812ab2-dcdf-48dd-986d-6bb5f6db9234" containerName="pruner" Jan 06 14:02:58 crc kubenswrapper[4869]: I0106 14:02:58.134999 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f812ab2-dcdf-48dd-986d-6bb5f6db9234" containerName="pruner" Jan 06 14:02:58 crc kubenswrapper[4869]: E0106 14:02:58.135028 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73dcccf9-165b-4be1-b27d-8d97f1db34ad" containerName="pruner" Jan 06 14:02:58 crc kubenswrapper[4869]: I0106 14:02:58.135041 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="73dcccf9-165b-4be1-b27d-8d97f1db34ad" containerName="pruner" Jan 06 14:02:58 crc kubenswrapper[4869]: I0106 14:02:58.135230 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f812ab2-dcdf-48dd-986d-6bb5f6db9234" containerName="pruner" Jan 06 14:02:58 crc kubenswrapper[4869]: I0106 14:02:58.135271 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="73dcccf9-165b-4be1-b27d-8d97f1db34ad" containerName="pruner" Jan 06 14:02:58 crc kubenswrapper[4869]: I0106 14:02:58.136105 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 06 14:02:58 crc kubenswrapper[4869]: I0106 14:02:58.142754 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 06 14:02:58 crc kubenswrapper[4869]: I0106 14:02:58.181373 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 06 14:02:58 crc kubenswrapper[4869]: I0106 14:02:58.181923 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 06 14:02:58 crc kubenswrapper[4869]: I0106 14:02:58.299083 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cac63b3-eb7f-4f7a-9897-0d649d4df5e9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4cac63b3-eb7f-4f7a-9897-0d649d4df5e9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 06 14:02:58 crc kubenswrapper[4869]: I0106 14:02:58.299159 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cac63b3-eb7f-4f7a-9897-0d649d4df5e9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4cac63b3-eb7f-4f7a-9897-0d649d4df5e9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 06 14:02:58 crc kubenswrapper[4869]: I0106 14:02:58.400502 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cac63b3-eb7f-4f7a-9897-0d649d4df5e9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4cac63b3-eb7f-4f7a-9897-0d649d4df5e9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 06 14:02:58 crc kubenswrapper[4869]: I0106 14:02:58.400555 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cac63b3-eb7f-4f7a-9897-0d649d4df5e9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4cac63b3-eb7f-4f7a-9897-0d649d4df5e9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 06 14:02:58 crc kubenswrapper[4869]: I0106 14:02:58.400650 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/4cac63b3-eb7f-4f7a-9897-0d649d4df5e9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4cac63b3-eb7f-4f7a-9897-0d649d4df5e9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 06 14:02:58 crc kubenswrapper[4869]: I0106 14:02:58.436385 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cac63b3-eb7f-4f7a-9897-0d649d4df5e9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4cac63b3-eb7f-4f7a-9897-0d649d4df5e9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 06 14:02:58 crc kubenswrapper[4869]: I0106 14:02:58.506003 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 06 14:03:03 crc kubenswrapper[4869]: I0106 14:03:03.542716 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 06 14:03:03 crc kubenswrapper[4869]: I0106 14:03:03.543728 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 06 14:03:03 crc kubenswrapper[4869]: I0106 14:03:03.543815 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 06 14:03:03 crc kubenswrapper[4869]: I0106 14:03:03.582540 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bac30697-1479-4a2f-8133-f80a7919f061-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bac30697-1479-4a2f-8133-f80a7919f061\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 06 14:03:03 crc kubenswrapper[4869]: I0106 14:03:03.582602 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bac30697-1479-4a2f-8133-f80a7919f061-kube-api-access\") pod \"installer-9-crc\" (UID: \"bac30697-1479-4a2f-8133-f80a7919f061\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 06 14:03:03 crc kubenswrapper[4869]: I0106 14:03:03.582689 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bac30697-1479-4a2f-8133-f80a7919f061-var-lock\") pod \"installer-9-crc\" (UID: \"bac30697-1479-4a2f-8133-f80a7919f061\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 06 14:03:03 crc kubenswrapper[4869]: I0106 14:03:03.622279 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:03:03 crc kubenswrapper[4869]: I0106 14:03:03.622344 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:03:03 crc kubenswrapper[4869]: I0106 14:03:03.622395 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:03:03 crc kubenswrapper[4869]: I0106 14:03:03.623698 4869 kuberuntime_manager.go:1027] "Message for Container of 
pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0"} pod="openshift-machine-config-operator/machine-config-daemon-kt9df" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 06 14:03:03 crc kubenswrapper[4869]: I0106 14:03:03.623805 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" containerID="cri-o://d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0" gracePeriod=600 Jan 06 14:03:03 crc kubenswrapper[4869]: I0106 14:03:03.683958 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bac30697-1479-4a2f-8133-f80a7919f061-var-lock\") pod \"installer-9-crc\" (UID: \"bac30697-1479-4a2f-8133-f80a7919f061\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 06 14:03:03 crc kubenswrapper[4869]: I0106 14:03:03.684042 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bac30697-1479-4a2f-8133-f80a7919f061-var-lock\") pod \"installer-9-crc\" (UID: \"bac30697-1479-4a2f-8133-f80a7919f061\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 06 14:03:03 crc kubenswrapper[4869]: I0106 14:03:03.684365 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bac30697-1479-4a2f-8133-f80a7919f061-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bac30697-1479-4a2f-8133-f80a7919f061\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 06 14:03:03 crc kubenswrapper[4869]: I0106 14:03:03.684433 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bac30697-1479-4a2f-8133-f80a7919f061-kube-api-access\") pod \"installer-9-crc\" (UID: \"bac30697-1479-4a2f-8133-f80a7919f061\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 06 14:03:03 crc kubenswrapper[4869]: I0106 14:03:03.684823 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bac30697-1479-4a2f-8133-f80a7919f061-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bac30697-1479-4a2f-8133-f80a7919f061\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 06 14:03:03 crc kubenswrapper[4869]: I0106 14:03:03.710612 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bac30697-1479-4a2f-8133-f80a7919f061-kube-api-access\") pod \"installer-9-crc\" (UID: \"bac30697-1479-4a2f-8133-f80a7919f061\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 06 14:03:03 crc kubenswrapper[4869]: I0106 14:03:03.872407 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 06 14:03:05 crc kubenswrapper[4869]: E0106 14:03:05.469502 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 06 14:03:05 crc kubenswrapper[4869]: E0106 14:03:05.469737 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wxzhb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-z66qd_openshift-marketplace(8cb04313-66de-451d-bf22-a91c11cf497a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 06 14:03:05 crc kubenswrapper[4869]: E0106 14:03:05.471275 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-z66qd" podUID="8cb04313-66de-451d-bf22-a91c11cf497a" Jan 06 14:03:06 crc kubenswrapper[4869]: I0106 14:03:06.134132 4869 generic.go:334] "Generic (PLEG): container finished" podID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerID="d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0" exitCode=0 Jan 06 14:03:06 crc kubenswrapper[4869]: I0106 14:03:06.134196 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerDied","Data":"d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0"} Jan 06 14:03:11 crc kubenswrapper[4869]: E0106 14:03:11.887564 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 06 14:03:11 crc kubenswrapper[4869]: E0106 14:03:11.888641 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wtncz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s2lrj_openshift-marketplace(bdffba93-f65c-45d9-98fe-9ea99cb13f14): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 06 14:03:11 crc kubenswrapper[4869]: E0106 14:03:11.890225 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-s2lrj" podUID="bdffba93-f65c-45d9-98fe-9ea99cb13f14" Jan 06 14:03:16 crc kubenswrapper[4869]: E0106 14:03:16.336491 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-s2lrj" podUID="bdffba93-f65c-45d9-98fe-9ea99cb13f14" Jan 06 14:03:16 crc kubenswrapper[4869]: E0106 14:03:16.443966 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 06 14:03:16 crc kubenswrapper[4869]: E0106 14:03:16.444193 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p59n4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-ct92x_openshift-marketplace(dff049ab-f2f2-47b0-ad0d-28a5977bd953): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 06 14:03:16 crc kubenswrapper[4869]: E0106 14:03:16.446217 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-ct92x" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" Jan 06 14:03:16 crc kubenswrapper[4869]: E0106 14:03:16.463370 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 06 14:03:16 crc kubenswrapper[4869]: E0106 14:03:16.463602 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vc2v9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-cbszs_openshift-marketplace(c0f4d25c-95bf-4bcd-b4a7-eb8344871cce): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 06 14:03:16 crc kubenswrapper[4869]: E0106 14:03:16.465266 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-cbszs" podUID="c0f4d25c-95bf-4bcd-b4a7-eb8344871cce" Jan 06 14:03:16 crc kubenswrapper[4869]: E0106 14:03:16.569653 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 06 14:03:16 crc kubenswrapper[4869]: E0106 14:03:16.569880 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nhxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-z5xn5_openshift-marketplace(c590ed4f-a46e-4826-beac-2d353aab75e1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 06 14:03:16 crc kubenswrapper[4869]: E0106 14:03:16.571004 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-z5xn5" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" Jan 06 14:03:16 crc kubenswrapper[4869]: E0106 14:03:16.682791 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 06 14:03:16 crc kubenswrapper[4869]: E0106 14:03:16.682977 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64ll4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-szfbw_openshift-marketplace(1a2b8334-967b-4600-954a-db3f0bd2cd80): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 06 14:03:16 crc kubenswrapper[4869]: E0106 14:03:16.684131 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-szfbw" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" Jan 06 14:03:18 crc kubenswrapper[4869]: E0106 14:03:18.420895 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-ct92x" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" Jan 06 14:03:18 crc kubenswrapper[4869]: E0106 14:03:18.421174 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cbszs" podUID="c0f4d25c-95bf-4bcd-b4a7-eb8344871cce" Jan 06 14:03:18 crc kubenswrapper[4869]: E0106 14:03:18.421456 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-z5xn5" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" Jan 06 14:03:18 crc kubenswrapper[4869]: E0106 14:03:18.639237 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 06 14:03:18 crc kubenswrapper[4869]: E0106 14:03:18.639688 4869 kuberuntime_manager.go:1274] "Unhandled Error" 
err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dc2kp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-j8wrz_openshift-marketplace(a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 06 14:03:18 crc kubenswrapper[4869]: E0106 14:03:18.641198 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-j8wrz" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" Jan 06 14:03:18 crc kubenswrapper[4869]: E0106 14:03:18.664292 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 06 14:03:18 crc kubenswrapper[4869]: E0106 14:03:18.664479 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2r7z9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-2l76t_openshift-marketplace(a3073b84-85aa-4f76-9ade-5e52abfc7cf7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 06 14:03:18 crc kubenswrapper[4869]: E0106 14:03:18.665712 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2l76t" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" Jan 06 14:03:18 crc kubenswrapper[4869]: I0106 14:03:18.723916 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 06 14:03:18 crc kubenswrapper[4869]: I0106 14:03:18.869792 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 06 14:03:18 crc kubenswrapper[4869]: W0106 14:03:18.882070 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4cac63b3_eb7f_4f7a_9897_0d649d4df5e9.slice/crio-f8b9107a80f37011196483136e599a1d560aaa44ee58fd114ffc0b1b4ea3c733 WatchSource:0}: Error finding container f8b9107a80f37011196483136e599a1d560aaa44ee58fd114ffc0b1b4ea3c733: Status 404 returned error can't find the container with id f8b9107a80f37011196483136e599a1d560aaa44ee58fd114ffc0b1b4ea3c733 Jan 06 14:03:19 crc kubenswrapper[4869]: I0106 14:03:19.214771 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4cac63b3-eb7f-4f7a-9897-0d649d4df5e9","Type":"ContainerStarted","Data":"5ac7c42ff14be4b9aa51b0bb3a524fa290b34c39f9d0d42a533c77193b548361"} Jan 06 14:03:19 crc kubenswrapper[4869]: I0106 14:03:19.215272 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4cac63b3-eb7f-4f7a-9897-0d649d4df5e9","Type":"ContainerStarted","Data":"f8b9107a80f37011196483136e599a1d560aaa44ee58fd114ffc0b1b4ea3c733"} Jan 06 14:03:19 crc kubenswrapper[4869]: I0106 14:03:19.220027 4869 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerStarted","Data":"d5321810772a97756861c2d66ff49b793c0dab0865c23023c08245455a5b7fce"} Jan 06 14:03:19 crc kubenswrapper[4869]: I0106 14:03:19.223010 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bac30697-1479-4a2f-8133-f80a7919f061","Type":"ContainerStarted","Data":"e838e9bda2d3a965ee5c7c38a213faa2a4691b88c738d98062b6dd8d499b99d7"} Jan 06 14:03:19 crc kubenswrapper[4869]: I0106 14:03:19.223057 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bac30697-1479-4a2f-8133-f80a7919f061","Type":"ContainerStarted","Data":"5a1cdfb33198b05ab642a21c77057e759a1d566ccf239bde0538daea1832124d"} Jan 06 14:03:19 crc kubenswrapper[4869]: E0106 14:03:19.227651 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-j8wrz" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" Jan 06 14:03:19 crc kubenswrapper[4869]: I0106 14:03:19.239614 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=21.239597086 podStartE2EDuration="21.239597086s" podCreationTimestamp="2026-01-06 14:02:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:03:19.2358062 +0000 UTC m=+217.775493864" watchObservedRunningTime="2026-01-06 14:03:19.239597086 +0000 UTC m=+217.779284750" Jan 06 14:03:19 crc kubenswrapper[4869]: I0106 14:03:19.254813 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=16.25480083 podStartE2EDuration="16.25480083s" podCreationTimestamp="2026-01-06 14:03:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:03:19.251229341 +0000 UTC m=+217.790917005" watchObservedRunningTime="2026-01-06 14:03:19.25480083 +0000 UTC m=+217.794488494" Jan 06 14:03:20 crc kubenswrapper[4869]: I0106 14:03:20.235491 4869 generic.go:334] "Generic (PLEG): container finished" podID="4cac63b3-eb7f-4f7a-9897-0d649d4df5e9" containerID="5ac7c42ff14be4b9aa51b0bb3a524fa290b34c39f9d0d42a533c77193b548361" exitCode=0 Jan 06 14:03:20 crc kubenswrapper[4869]: I0106 14:03:20.235720 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4cac63b3-eb7f-4f7a-9897-0d649d4df5e9","Type":"ContainerDied","Data":"5ac7c42ff14be4b9aa51b0bb3a524fa290b34c39f9d0d42a533c77193b548361"} Jan 06 14:03:21 crc kubenswrapper[4869]: I0106 14:03:21.548485 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 06 14:03:21 crc kubenswrapper[4869]: I0106 14:03:21.677208 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cac63b3-eb7f-4f7a-9897-0d649d4df5e9-kubelet-dir\") pod \"4cac63b3-eb7f-4f7a-9897-0d649d4df5e9\" (UID: \"4cac63b3-eb7f-4f7a-9897-0d649d4df5e9\") " Jan 06 14:03:21 crc kubenswrapper[4869]: I0106 14:03:21.677678 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cac63b3-eb7f-4f7a-9897-0d649d4df5e9-kube-api-access\") pod \"4cac63b3-eb7f-4f7a-9897-0d649d4df5e9\" (UID: \"4cac63b3-eb7f-4f7a-9897-0d649d4df5e9\") " Jan 06 14:03:21 crc kubenswrapper[4869]: I0106 14:03:21.677363 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cac63b3-eb7f-4f7a-9897-0d649d4df5e9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4cac63b3-eb7f-4f7a-9897-0d649d4df5e9" (UID: "4cac63b3-eb7f-4f7a-9897-0d649d4df5e9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:03:21 crc kubenswrapper[4869]: I0106 14:03:21.677963 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cac63b3-eb7f-4f7a-9897-0d649d4df5e9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 06 14:03:21 crc kubenswrapper[4869]: I0106 14:03:21.683776 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cac63b3-eb7f-4f7a-9897-0d649d4df5e9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4cac63b3-eb7f-4f7a-9897-0d649d4df5e9" (UID: "4cac63b3-eb7f-4f7a-9897-0d649d4df5e9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:03:21 crc kubenswrapper[4869]: I0106 14:03:21.779708 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4cac63b3-eb7f-4f7a-9897-0d649d4df5e9-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 06 14:03:22 crc kubenswrapper[4869]: I0106 14:03:22.249890 4869 generic.go:334] "Generic (PLEG): container finished" podID="8cb04313-66de-451d-bf22-a91c11cf497a" containerID="661bd071040c6a755eae7a07b72cf8b53ce52a6d936614ffecc1c4cd9c6d41b7" exitCode=0 Jan 06 14:03:22 crc kubenswrapper[4869]: I0106 14:03:22.250002 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z66qd" event={"ID":"8cb04313-66de-451d-bf22-a91c11cf497a","Type":"ContainerDied","Data":"661bd071040c6a755eae7a07b72cf8b53ce52a6d936614ffecc1c4cd9c6d41b7"} Jan 06 14:03:22 crc kubenswrapper[4869]: I0106 14:03:22.252590 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4cac63b3-eb7f-4f7a-9897-0d649d4df5e9","Type":"ContainerDied","Data":"f8b9107a80f37011196483136e599a1d560aaa44ee58fd114ffc0b1b4ea3c733"} Jan 06 14:03:22 crc kubenswrapper[4869]: I0106 14:03:22.252655 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8b9107a80f37011196483136e599a1d560aaa44ee58fd114ffc0b1b4ea3c733" Jan 06 14:03:22 crc kubenswrapper[4869]: I0106 14:03:22.252691 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 06 14:03:23 crc kubenswrapper[4869]: I0106 14:03:23.261294 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z66qd" event={"ID":"8cb04313-66de-451d-bf22-a91c11cf497a","Type":"ContainerStarted","Data":"693a9c646450d841835bca8f0584e43d7207c98e0ab45eabe817c8db9bc98e74"} Jan 06 14:03:23 crc kubenswrapper[4869]: I0106 14:03:23.285395 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z66qd" podStartSLOduration=4.042424735 podStartE2EDuration="1m4.285372089s" podCreationTimestamp="2026-01-06 14:02:19 +0000 UTC" firstStartedPulling="2026-01-06 14:02:22.601366068 +0000 UTC m=+161.141053732" lastFinishedPulling="2026-01-06 14:03:22.844313432 +0000 UTC m=+221.384001086" observedRunningTime="2026-01-06 14:03:23.28297185 +0000 UTC m=+221.822659524" watchObservedRunningTime="2026-01-06 14:03:23.285372089 +0000 UTC m=+221.825059753" Jan 06 14:03:29 crc kubenswrapper[4869]: I0106 14:03:29.516971 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z66qd" Jan 06 14:03:29 crc kubenswrapper[4869]: I0106 14:03:29.517426 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z66qd" Jan 06 14:03:29 crc kubenswrapper[4869]: I0106 14:03:29.605930 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z66qd" Jan 06 14:03:30 crc kubenswrapper[4869]: I0106 14:03:30.303411 4869 generic.go:334] "Generic (PLEG): container finished" podID="bdffba93-f65c-45d9-98fe-9ea99cb13f14" containerID="ae3328174310813235fd2409bf6325c869a39d2e66004c0bf4ae42f2d6242a71" exitCode=0 Jan 06 14:03:30 crc kubenswrapper[4869]: I0106 14:03:30.303510 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2lrj" event={"ID":"bdffba93-f65c-45d9-98fe-9ea99cb13f14","Type":"ContainerDied","Data":"ae3328174310813235fd2409bf6325c869a39d2e66004c0bf4ae42f2d6242a71"} Jan 06 14:03:30 crc kubenswrapper[4869]: I0106 14:03:30.308289 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbszs" event={"ID":"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce","Type":"ContainerStarted","Data":"3089f71ab4726fb4b48874525c0f338fdc0eed4d037696292c7d5442e4a83391"} Jan 06 14:03:30 crc kubenswrapper[4869]: I0106 14:03:30.363969 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z66qd" Jan 06 14:03:31 crc kubenswrapper[4869]: I0106 14:03:31.316342 4869 generic.go:334] "Generic (PLEG): container finished" podID="c0f4d25c-95bf-4bcd-b4a7-eb8344871cce" containerID="3089f71ab4726fb4b48874525c0f338fdc0eed4d037696292c7d5442e4a83391" exitCode=0 Jan 06 14:03:31 crc kubenswrapper[4869]: I0106 14:03:31.316454 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbszs" event={"ID":"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce","Type":"ContainerDied","Data":"3089f71ab4726fb4b48874525c0f338fdc0eed4d037696292c7d5442e4a83391"} Jan 06 14:03:31 crc kubenswrapper[4869]: I0106 14:03:31.321186 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2lrj" 
event={"ID":"bdffba93-f65c-45d9-98fe-9ea99cb13f14","Type":"ContainerStarted","Data":"631533c35181a39420027b6f7475cdf98e40f7e3958b829c7387e598d772af6e"} Jan 06 14:03:31 crc kubenswrapper[4869]: I0106 14:03:31.363205 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s2lrj" podStartSLOduration=2.945500125 podStartE2EDuration="1m11.363185882s" podCreationTimestamp="2026-01-06 14:02:20 +0000 UTC" firstStartedPulling="2026-01-06 14:02:22.518911267 +0000 UTC m=+161.058598931" lastFinishedPulling="2026-01-06 14:03:30.936597024 +0000 UTC m=+229.476284688" observedRunningTime="2026-01-06 14:03:31.360965439 +0000 UTC m=+229.900653103" watchObservedRunningTime="2026-01-06 14:03:31.363185882 +0000 UTC m=+229.902873546" Jan 06 14:03:32 crc kubenswrapper[4869]: I0106 14:03:32.634543 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z66qd"] Jan 06 14:03:32 crc kubenswrapper[4869]: I0106 14:03:32.635076 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z66qd" podUID="8cb04313-66de-451d-bf22-a91c11cf497a" containerName="registry-server" containerID="cri-o://693a9c646450d841835bca8f0584e43d7207c98e0ab45eabe817c8db9bc98e74" gracePeriod=2 Jan 06 14:03:33 crc kubenswrapper[4869]: I0106 14:03:33.336799 4869 generic.go:334] "Generic (PLEG): container finished" podID="8cb04313-66de-451d-bf22-a91c11cf497a" containerID="693a9c646450d841835bca8f0584e43d7207c98e0ab45eabe817c8db9bc98e74" exitCode=0 Jan 06 14:03:33 crc kubenswrapper[4869]: I0106 14:03:33.336870 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z66qd" event={"ID":"8cb04313-66de-451d-bf22-a91c11cf497a","Type":"ContainerDied","Data":"693a9c646450d841835bca8f0584e43d7207c98e0ab45eabe817c8db9bc98e74"} Jan 06 14:03:33 crc kubenswrapper[4869]: I0106 14:03:33.949134 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z66qd" Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.064951 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb04313-66de-451d-bf22-a91c11cf497a-utilities\") pod \"8cb04313-66de-451d-bf22-a91c11cf497a\" (UID: \"8cb04313-66de-451d-bf22-a91c11cf497a\") " Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.065050 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb04313-66de-451d-bf22-a91c11cf497a-catalog-content\") pod \"8cb04313-66de-451d-bf22-a91c11cf497a\" (UID: \"8cb04313-66de-451d-bf22-a91c11cf497a\") " Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.065197 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxzhb\" (UniqueName: \"kubernetes.io/projected/8cb04313-66de-451d-bf22-a91c11cf497a-kube-api-access-wxzhb\") pod \"8cb04313-66de-451d-bf22-a91c11cf497a\" (UID: \"8cb04313-66de-451d-bf22-a91c11cf497a\") " Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.065867 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cb04313-66de-451d-bf22-a91c11cf497a-utilities" (OuterVolumeSpecName: "utilities") pod "8cb04313-66de-451d-bf22-a91c11cf497a" (UID: "8cb04313-66de-451d-bf22-a91c11cf497a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.072133 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cb04313-66de-451d-bf22-a91c11cf497a-kube-api-access-wxzhb" (OuterVolumeSpecName: "kube-api-access-wxzhb") pod "8cb04313-66de-451d-bf22-a91c11cf497a" (UID: "8cb04313-66de-451d-bf22-a91c11cf497a"). InnerVolumeSpecName "kube-api-access-wxzhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.131930 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cb04313-66de-451d-bf22-a91c11cf497a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8cb04313-66de-451d-bf22-a91c11cf497a" (UID: "8cb04313-66de-451d-bf22-a91c11cf497a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.167310 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxzhb\" (UniqueName: \"kubernetes.io/projected/8cb04313-66de-451d-bf22-a91c11cf497a-kube-api-access-wxzhb\") on node \"crc\" DevicePath \"\"" Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.167632 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8cb04313-66de-451d-bf22-a91c11cf497a-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.167817 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8cb04313-66de-451d-bf22-a91c11cf497a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.345117 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ct92x" event={"ID":"dff049ab-f2f2-47b0-ad0d-28a5977bd953","Type":"ContainerStarted","Data":"f7fdcad0355eb7f745d35e2a23e5a563a1a1fb0a82a483c539000b4bffef10cf"} Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.348624 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z66qd" event={"ID":"8cb04313-66de-451d-bf22-a91c11cf497a","Type":"ContainerDied","Data":"461ac75ee940c77c17a43e441cd1e836cbd79297e6086fcf9071aa2d8be1450c"} Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.348801 4869 scope.go:117] "RemoveContainer" containerID="693a9c646450d841835bca8f0584e43d7207c98e0ab45eabe817c8db9bc98e74" Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.348692 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z66qd" Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.352297 4869 generic.go:334] "Generic (PLEG): container finished" podID="c590ed4f-a46e-4826-beac-2d353aab75e1" containerID="784ab89566a139f289692bf04bca80070c919ebfb6596fdbc8bb7f3f8784240a" exitCode=0 Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.352366 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5xn5" event={"ID":"c590ed4f-a46e-4826-beac-2d353aab75e1","Type":"ContainerDied","Data":"784ab89566a139f289692bf04bca80070c919ebfb6596fdbc8bb7f3f8784240a"} Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.356197 4869 generic.go:334] "Generic (PLEG): container finished" podID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" containerID="1ae0e4c79af4e12818013e9d19f0f23be7161b916362af88db2ceddb44e422a7" exitCode=0 Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.356330 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2l76t" event={"ID":"a3073b84-85aa-4f76-9ade-5e52abfc7cf7","Type":"ContainerDied","Data":"1ae0e4c79af4e12818013e9d19f0f23be7161b916362af88db2ceddb44e422a7"} Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.366054 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbszs" event={"ID":"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce","Type":"ContainerStarted","Data":"f83052014b03c289694025817f4d8b03b70b9417f78fc627f51773a2db8e71b0"} Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.369434 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szfbw" event={"ID":"1a2b8334-967b-4600-954a-db3f0bd2cd80","Type":"ContainerStarted","Data":"14432d944c73407e895431d4c827906e7c515eb27e5ec432beaf80b8e58d5a5c"} Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.378924 4869 scope.go:117] "RemoveContainer" containerID="661bd071040c6a755eae7a07b72cf8b53ce52a6d936614ffecc1c4cd9c6d41b7" Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.410885 4869 scope.go:117] "RemoveContainer" containerID="e306b8559baa780154cf52ad3fc3dd14d6698ed2ba26d9864ad47afabf896455" Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.448992 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cbszs" podStartSLOduration=3.804772524 podStartE2EDuration="1m12.448972502s" podCreationTimestamp="2026-01-06 14:02:22 +0000 UTC" firstStartedPulling="2026-01-06 14:02:24.660839878 +0000 UTC m=+163.200527542" lastFinishedPulling="2026-01-06 14:03:33.305039866 +0000 UTC m=+231.844727520" observedRunningTime="2026-01-06 14:03:34.431497703 +0000 UTC m=+232.971185367" watchObservedRunningTime="2026-01-06 14:03:34.448972502 +0000 UTC m=+232.988660166" Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.452393 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z66qd"] Jan 06 14:03:34 crc kubenswrapper[4869]: I0106 14:03:34.455196 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z66qd"] Jan 06 14:03:35 crc kubenswrapper[4869]: I0106 14:03:35.711759 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cb04313-66de-451d-bf22-a91c11cf497a" path="/var/lib/kubelet/pods/8cb04313-66de-451d-bf22-a91c11cf497a/volumes" Jan 06 14:03:36 crc kubenswrapper[4869]: I0106 14:03:36.385730 4869 generic.go:334] "Generic 
(PLEG): container finished" podID="1a2b8334-967b-4600-954a-db3f0bd2cd80" containerID="14432d944c73407e895431d4c827906e7c515eb27e5ec432beaf80b8e58d5a5c" exitCode=0 Jan 06 14:03:36 crc kubenswrapper[4869]: I0106 14:03:36.385782 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szfbw" event={"ID":"1a2b8334-967b-4600-954a-db3f0bd2cd80","Type":"ContainerDied","Data":"14432d944c73407e895431d4c827906e7c515eb27e5ec432beaf80b8e58d5a5c"} Jan 06 14:03:38 crc kubenswrapper[4869]: I0106 14:03:38.400489 4869 generic.go:334] "Generic (PLEG): container finished" podID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" containerID="f7fdcad0355eb7f745d35e2a23e5a563a1a1fb0a82a483c539000b4bffef10cf" exitCode=0 Jan 06 14:03:38 crc kubenswrapper[4869]: I0106 14:03:38.400588 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ct92x" event={"ID":"dff049ab-f2f2-47b0-ad0d-28a5977bd953","Type":"ContainerDied","Data":"f7fdcad0355eb7f745d35e2a23e5a563a1a1fb0a82a483c539000b4bffef10cf"} Jan 06 14:03:38 crc kubenswrapper[4869]: I0106 14:03:38.531442 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qmjgl"] Jan 06 14:03:41 crc kubenswrapper[4869]: I0106 14:03:41.123472 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s2lrj" Jan 06 14:03:41 crc kubenswrapper[4869]: I0106 14:03:41.124078 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s2lrj" Jan 06 14:03:41 crc kubenswrapper[4869]: I0106 14:03:41.170633 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s2lrj" Jan 06 14:03:41 crc kubenswrapper[4869]: I0106 14:03:41.470959 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s2lrj" Jan 06 14:03:42 crc kubenswrapper[4869]: I0106 14:03:42.642028 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2lrj"] Jan 06 14:03:42 crc kubenswrapper[4869]: I0106 14:03:42.931594 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cbszs" Jan 06 14:03:42 crc kubenswrapper[4869]: I0106 14:03:42.931648 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cbszs" Jan 06 14:03:42 crc kubenswrapper[4869]: I0106 14:03:42.971560 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cbszs" Jan 06 14:03:43 crc kubenswrapper[4869]: I0106 14:03:43.438290 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s2lrj" podUID="bdffba93-f65c-45d9-98fe-9ea99cb13f14" containerName="registry-server" containerID="cri-o://631533c35181a39420027b6f7475cdf98e40f7e3958b829c7387e598d772af6e" gracePeriod=2 Jan 06 14:03:43 crc kubenswrapper[4869]: I0106 14:03:43.504731 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cbszs" Jan 06 14:03:47 crc kubenswrapper[4869]: I0106 14:03:47.460336 4869 generic.go:334] "Generic (PLEG): container finished" podID="bdffba93-f65c-45d9-98fe-9ea99cb13f14" containerID="631533c35181a39420027b6f7475cdf98e40f7e3958b829c7387e598d772af6e" exitCode=0 Jan 06 
14:03:47 crc kubenswrapper[4869]: I0106 14:03:47.460416 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2lrj" event={"ID":"bdffba93-f65c-45d9-98fe-9ea99cb13f14","Type":"ContainerDied","Data":"631533c35181a39420027b6f7475cdf98e40f7e3958b829c7387e598d772af6e"} Jan 06 14:03:49 crc kubenswrapper[4869]: I0106 14:03:49.771138 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s2lrj" Jan 06 14:03:50 crc kubenswrapper[4869]: I0106 14:03:50.028138 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtncz\" (UniqueName: \"kubernetes.io/projected/bdffba93-f65c-45d9-98fe-9ea99cb13f14-kube-api-access-wtncz\") pod \"bdffba93-f65c-45d9-98fe-9ea99cb13f14\" (UID: \"bdffba93-f65c-45d9-98fe-9ea99cb13f14\") " Jan 06 14:03:50 crc kubenswrapper[4869]: I0106 14:03:50.028202 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdffba93-f65c-45d9-98fe-9ea99cb13f14-catalog-content\") pod \"bdffba93-f65c-45d9-98fe-9ea99cb13f14\" (UID: \"bdffba93-f65c-45d9-98fe-9ea99cb13f14\") " Jan 06 14:03:50 crc kubenswrapper[4869]: I0106 14:03:50.028230 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdffba93-f65c-45d9-98fe-9ea99cb13f14-utilities\") pod \"bdffba93-f65c-45d9-98fe-9ea99cb13f14\" (UID: \"bdffba93-f65c-45d9-98fe-9ea99cb13f14\") " Jan 06 14:03:50 crc kubenswrapper[4869]: I0106 14:03:50.029350 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdffba93-f65c-45d9-98fe-9ea99cb13f14-utilities" (OuterVolumeSpecName: "utilities") pod "bdffba93-f65c-45d9-98fe-9ea99cb13f14" (UID: "bdffba93-f65c-45d9-98fe-9ea99cb13f14"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:03:50 crc kubenswrapper[4869]: I0106 14:03:50.034694 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdffba93-f65c-45d9-98fe-9ea99cb13f14-kube-api-access-wtncz" (OuterVolumeSpecName: "kube-api-access-wtncz") pod "bdffba93-f65c-45d9-98fe-9ea99cb13f14" (UID: "bdffba93-f65c-45d9-98fe-9ea99cb13f14"). InnerVolumeSpecName "kube-api-access-wtncz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:03:50 crc kubenswrapper[4869]: I0106 14:03:50.051204 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdffba93-f65c-45d9-98fe-9ea99cb13f14-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdffba93-f65c-45d9-98fe-9ea99cb13f14" (UID: "bdffba93-f65c-45d9-98fe-9ea99cb13f14"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:03:50 crc kubenswrapper[4869]: I0106 14:03:50.129515 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtncz\" (UniqueName: \"kubernetes.io/projected/bdffba93-f65c-45d9-98fe-9ea99cb13f14-kube-api-access-wtncz\") on node \"crc\" DevicePath \"\"" Jan 06 14:03:50 crc kubenswrapper[4869]: I0106 14:03:50.129554 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdffba93-f65c-45d9-98fe-9ea99cb13f14-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:03:50 crc kubenswrapper[4869]: I0106 14:03:50.129565 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdffba93-f65c-45d9-98fe-9ea99cb13f14-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:03:50 crc kubenswrapper[4869]: I0106 14:03:50.479638 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s2lrj" event={"ID":"bdffba93-f65c-45d9-98fe-9ea99cb13f14","Type":"ContainerDied","Data":"4b9293abeab2f2922cb74ab4466b621b0295ff17ee5734661c66e0578266ae55"} Jan 06 14:03:50 crc kubenswrapper[4869]: I0106 14:03:50.479727 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s2lrj" Jan 06 14:03:50 crc kubenswrapper[4869]: I0106 14:03:50.479987 4869 scope.go:117] "RemoveContainer" containerID="631533c35181a39420027b6f7475cdf98e40f7e3958b829c7387e598d772af6e" Jan 06 14:03:50 crc kubenswrapper[4869]: I0106 14:03:50.507208 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2lrj"] Jan 06 14:03:50 crc kubenswrapper[4869]: I0106 14:03:50.512989 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s2lrj"] Jan 06 14:03:51 crc kubenswrapper[4869]: I0106 14:03:51.715520 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdffba93-f65c-45d9-98fe-9ea99cb13f14" path="/var/lib/kubelet/pods/bdffba93-f65c-45d9-98fe-9ea99cb13f14/volumes" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.211776 4869 scope.go:117] "RemoveContainer" containerID="ae3328174310813235fd2409bf6325c869a39d2e66004c0bf4ae42f2d6242a71" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.698530 4869 scope.go:117] "RemoveContainer" containerID="13223ec013b581d60b87457420b2ed44f3b4c945ce604f3dc88d3c8df9fbbed6" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.797176 4869 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.798207 4869 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.798417 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed" gracePeriod=15 Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.798640 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
containerID="cri-o://91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d" gracePeriod=15 Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.798807 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7" gracePeriod=15 Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.798925 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f" gracePeriod=15 Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799045 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a" gracePeriod=15 Jan 06 14:03:56 crc kubenswrapper[4869]: E0106 14:03:56.798835 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799143 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 06 14:03:56 crc kubenswrapper[4869]: E0106 14:03:56.799156 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb04313-66de-451d-bf22-a91c11cf497a" containerName="registry-server" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799163 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb04313-66de-451d-bf22-a91c11cf497a" containerName="registry-server" Jan 06 14:03:56 crc kubenswrapper[4869]: E0106 14:03:56.799174 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cac63b3-eb7f-4f7a-9897-0d649d4df5e9" containerName="pruner" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799180 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cac63b3-eb7f-4f7a-9897-0d649d4df5e9" containerName="pruner" Jan 06 14:03:56 crc kubenswrapper[4869]: E0106 14:03:56.799190 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799197 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 06 14:03:56 crc kubenswrapper[4869]: E0106 14:03:56.799206 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdffba93-f65c-45d9-98fe-9ea99cb13f14" containerName="extract-content" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799215 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdffba93-f65c-45d9-98fe-9ea99cb13f14" containerName="extract-content" Jan 06 14:03:56 crc kubenswrapper[4869]: E0106 14:03:56.799223 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799228 4869 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 06 14:03:56 crc kubenswrapper[4869]: E0106 14:03:56.799238 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdffba93-f65c-45d9-98fe-9ea99cb13f14" containerName="extract-utilities" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799245 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdffba93-f65c-45d9-98fe-9ea99cb13f14" containerName="extract-utilities" Jan 06 14:03:56 crc kubenswrapper[4869]: E0106 14:03:56.799252 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799258 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 06 14:03:56 crc kubenswrapper[4869]: E0106 14:03:56.799266 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb04313-66de-451d-bf22-a91c11cf497a" containerName="extract-utilities" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799271 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb04313-66de-451d-bf22-a91c11cf497a" containerName="extract-utilities" Jan 06 14:03:56 crc kubenswrapper[4869]: E0106 14:03:56.799280 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799286 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 06 14:03:56 crc kubenswrapper[4869]: E0106 14:03:56.799295 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799302 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 06 14:03:56 crc kubenswrapper[4869]: E0106 14:03:56.799311 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799317 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 06 14:03:56 crc kubenswrapper[4869]: E0106 14:03:56.799326 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdffba93-f65c-45d9-98fe-9ea99cb13f14" containerName="registry-server" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799332 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdffba93-f65c-45d9-98fe-9ea99cb13f14" containerName="registry-server" Jan 06 14:03:56 crc kubenswrapper[4869]: E0106 14:03:56.799341 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb04313-66de-451d-bf22-a91c11cf497a" containerName="extract-content" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799347 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb04313-66de-451d-bf22-a91c11cf497a" containerName="extract-content" Jan 06 14:03:56 crc kubenswrapper[4869]: E0106 14:03:56.799355 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799360 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799496 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799515 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799524 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cac63b3-eb7f-4f7a-9897-0d649d4df5e9" containerName="pruner" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799533 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799542 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799552 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cb04313-66de-451d-bf22-a91c11cf497a" containerName="registry-server" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799562 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799572 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdffba93-f65c-45d9-98fe-9ea99cb13f14" containerName="registry-server" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799582 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.799856 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.801021 4869 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.801512 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.805285 4869 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 06 14:03:56 crc kubenswrapper[4869]: E0106 14:03:56.913062 4869 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.230:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.936790 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.936850 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.936874 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.936899 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.936928 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.936949 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.936971 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:03:56 crc kubenswrapper[4869]: I0106 14:03:56.937042 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 06 14:03:56 crc kubenswrapper[4869]: E0106 14:03:56.959367 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.230:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-ct92x.188829469a75b2fd openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-ct92x,UID:dff049ab-f2f2-47b0-ad0d-28a5977bd953,APIVersion:v1,ResourceVersion:28747,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-06 14:03:56.958774013 +0000 UTC m=+255.498461677,LastTimestamp:2026-01-06 14:03:56.958774013 +0000 UTC m=+255.498461677,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.038397 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.038483 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.038515 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.038545 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.038587 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.038616 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.038641 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.038762 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.038803 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.038895 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.039052 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.039076 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.039092 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.039107 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.039121 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.039136 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.214684 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 06 14:03:57 crc kubenswrapper[4869]: E0106 14:03:57.445132 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.230:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-ct92x.188829469a75b2fd openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-ct92x,UID:dff049ab-f2f2-47b0-ad0d-28a5977bd953,APIVersion:v1,ResourceVersion:28747,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-06 14:03:56.958774013 +0000 UTC m=+255.498461677,LastTimestamp:2026-01-06 14:03:56.958774013 +0000 UTC m=+255.498461677,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.526248 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c8048e3d4420c692595a0aeee5415bac7e67d2ff337866961b3bb3dfc65eb6e9"} Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.526299 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a257db3b736e631a98acbc2e7e828624f54a4ca08fe855e913a389fa74060e71"} Jan 06 14:03:57 crc kubenswrapper[4869]: E0106 14:03:57.527016 4869 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.230:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.529706 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.530794 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.531364 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d" exitCode=0 Jan 06 14:03:57 
crc kubenswrapper[4869]: I0106 14:03:57.531390 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7" exitCode=0 Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.531397 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a" exitCode=0 Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.531405 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f" exitCode=2 Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.531470 4869 scope.go:117] "RemoveContainer" containerID="e95554d05c91878648fac26a67ebcc1efb107d78447db70fbf5a7c2c392461d1" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.533960 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ct92x" event={"ID":"dff049ab-f2f2-47b0-ad0d-28a5977bd953","Type":"ContainerStarted","Data":"6d4337bf98f368463e127e06184adf979f52adfbec52c1ce66bd0e12fee3fac9"} Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.534805 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.535863 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5xn5" event={"ID":"c590ed4f-a46e-4826-beac-2d353aab75e1","Type":"ContainerStarted","Data":"2533aa5f3f57120e64fc88a1065174886eb09d3a111167a642eef9caf4e9349b"} Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.536799 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.537031 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.537097 4869 generic.go:334] "Generic (PLEG): container finished" podID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" containerID="1d0bf702c91fb5171731384a12b1c0a4cfffa39dc94e0b1cad54be8c8464d978" exitCode=0 Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.537154 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8wrz" event={"ID":"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137","Type":"ContainerDied","Data":"1d0bf702c91fb5171731384a12b1c0a4cfffa39dc94e0b1cad54be8c8464d978"} Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.538306 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.538808 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.539083 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.539512 4869 generic.go:334] "Generic (PLEG): container finished" podID="bac30697-1479-4a2f-8133-f80a7919f061" containerID="e838e9bda2d3a965ee5c7c38a213faa2a4691b88c738d98062b6dd8d499b99d7" exitCode=0 Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.539543 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bac30697-1479-4a2f-8133-f80a7919f061","Type":"ContainerDied","Data":"e838e9bda2d3a965ee5c7c38a213faa2a4691b88c738d98062b6dd8d499b99d7"} Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.540720 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.541096 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.541361 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.542452 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.546088 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szfbw" event={"ID":"1a2b8334-967b-4600-954a-db3f0bd2cd80","Type":"ContainerStarted","Data":"c7acd53e9d750773b403e1e8301f089a8429857a2c4a69c4d3added46c6d5dfe"} Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.546741 4869 status_manager.go:851] "Failed to get status for pod" 
podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.547011 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.547354 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.547781 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.548000 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.550140 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2l76t" event={"ID":"a3073b84-85aa-4f76-9ade-5e52abfc7cf7","Type":"ContainerStarted","Data":"2f922aa859804868bd11abfd3def7d30c06e20401ca97ecf109a69f693814cea"} Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.551115 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.551553 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.551853 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.552158 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" 
pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.552392 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.552607 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: E0106 14:03:57.629022 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: E0106 14:03:57.629319 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: E0106 14:03:57.629525 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: E0106 14:03:57.629763 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: E0106 14:03:57.630006 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:57 crc kubenswrapper[4869]: I0106 14:03:57.630047 4869 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 06 14:03:57 crc kubenswrapper[4869]: E0106 14:03:57.630299 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="200ms" Jan 06 14:03:57 crc kubenswrapper[4869]: E0106 14:03:57.831698 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="400ms" Jan 06 14:03:58 crc kubenswrapper[4869]: E0106 14:03:58.232740 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="800ms" Jan 06 14:03:58 crc kubenswrapper[4869]: E0106 14:03:58.400563 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:03:58Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:03:58Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:03:58Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:03:58Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4ce15258595e88a4613e55b5002340f1143608530d748c8b36870dd2a4b6ae62\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:701def604428296b90a5337fdea5e4cda84c79e8afe0f67090cdd205e6476aa4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1655667031},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:14b3ee92c08a9bf28d563142e30580743af07af884dfc84ab348a5a7beacffa0\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:6d021c1f0f84f1c5c2f7f66eb7508856040394b5be754fc8c6debc66644368b5\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1231959908},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:19c4662e9863b5adfd8dafb67cee3dc6d84c3e0230f73df2e278f70a40e66ea2\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:301d8a7121856c0a045e2171455b80a60da368013cca7906f1ce3c4de2ca9858\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1203987286},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:6b3b97e17390b5ee568393f2501a5fc412865074b8f6c5355ea48ab7c3983b7a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:8bb7ea6c489e90cb357c7f50fe8266a6a6c6e23e4931a5eaa0fd33a409db20e8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1175127379},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-inde
x@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505
721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for 
node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:58 crc kubenswrapper[4869]: E0106 14:03:58.401117 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:58 crc kubenswrapper[4869]: E0106 14:03:58.401341 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:58 crc kubenswrapper[4869]: E0106 14:03:58.401642 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:58 crc kubenswrapper[4869]: E0106 14:03:58.401977 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:58 crc kubenswrapper[4869]: E0106 14:03:58.401998 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 06 14:03:58 crc kubenswrapper[4869]: I0106 14:03:58.559251 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8wrz" event={"ID":"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137","Type":"ContainerStarted","Data":"c9613543e86c1c89588bea85cc257021eab54e7e5b5a3c709e883b680cdcef28"} Jan 06 14:03:58 crc kubenswrapper[4869]: I0106 14:03:58.560096 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:58 crc kubenswrapper[4869]: I0106 14:03:58.560520 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:58 crc kubenswrapper[4869]: I0106 14:03:58.560762 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:58 crc kubenswrapper[4869]: I0106 14:03:58.560964 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:58 crc kubenswrapper[4869]: I0106 14:03:58.561250 4869 status_manager.go:851] "Failed to get status for pod" 
podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:58 crc kubenswrapper[4869]: I0106 14:03:58.561424 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:58 crc kubenswrapper[4869]: I0106 14:03:58.562598 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 06 14:03:58 crc kubenswrapper[4869]: I0106 14:03:58.910206 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 06 14:03:58 crc kubenswrapper[4869]: I0106 14:03:58.912530 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:58 crc kubenswrapper[4869]: I0106 14:03:58.915763 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:58 crc kubenswrapper[4869]: I0106 14:03:58.916148 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:58 crc kubenswrapper[4869]: I0106 14:03:58.916329 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:58 crc kubenswrapper[4869]: I0106 14:03:58.916493 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:58 crc kubenswrapper[4869]: I0106 14:03:58.916723 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:58 crc kubenswrapper[4869]: I0106 14:03:58.963516 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bac30697-1479-4a2f-8133-f80a7919f061-kube-api-access\") pod \"bac30697-1479-4a2f-8133-f80a7919f061\" (UID: \"bac30697-1479-4a2f-8133-f80a7919f061\") " Jan 06 14:03:58 crc kubenswrapper[4869]: I0106 14:03:58.963864 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bac30697-1479-4a2f-8133-f80a7919f061-kubelet-dir\") pod \"bac30697-1479-4a2f-8133-f80a7919f061\" (UID: \"bac30697-1479-4a2f-8133-f80a7919f061\") " Jan 06 14:03:58 crc kubenswrapper[4869]: I0106 14:03:58.963993 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bac30697-1479-4a2f-8133-f80a7919f061-var-lock\") pod \"bac30697-1479-4a2f-8133-f80a7919f061\" (UID: \"bac30697-1479-4a2f-8133-f80a7919f061\") " Jan 06 14:03:58 crc kubenswrapper[4869]: I0106 14:03:58.964284 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bac30697-1479-4a2f-8133-f80a7919f061-var-lock" (OuterVolumeSpecName: "var-lock") pod "bac30697-1479-4a2f-8133-f80a7919f061" (UID: "bac30697-1479-4a2f-8133-f80a7919f061"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:03:58 crc kubenswrapper[4869]: I0106 14:03:58.964762 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bac30697-1479-4a2f-8133-f80a7919f061-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bac30697-1479-4a2f-8133-f80a7919f061" (UID: "bac30697-1479-4a2f-8133-f80a7919f061"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:03:58 crc kubenswrapper[4869]: I0106 14:03:58.970974 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac30697-1479-4a2f-8133-f80a7919f061-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bac30697-1479-4a2f-8133-f80a7919f061" (UID: "bac30697-1479-4a2f-8133-f80a7919f061"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:03:59 crc kubenswrapper[4869]: E0106 14:03:59.033317 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="1.6s" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.065291 4869 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bac30697-1479-4a2f-8133-f80a7919f061-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.065573 4869 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bac30697-1479-4a2f-8133-f80a7919f061-var-lock\") on node \"crc\" DevicePath \"\"" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.065896 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bac30697-1479-4a2f-8133-f80a7919f061-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.188111 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.189148 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.189841 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.190197 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.190463 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.190766 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.191141 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.230:6443: 
connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.191372 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.191600 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.214914 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-szfbw" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.215250 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-szfbw" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.265402 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-szfbw" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.266087 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.266503 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.266815 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.266992 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.267134 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.267267 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" 
pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.267416 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.267590 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.267629 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.267646 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.267860 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.267872 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.267936 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.369103 4869 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.369142 4869 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.369152 4869 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.574760 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bac30697-1479-4a2f-8133-f80a7919f061","Type":"ContainerDied","Data":"5a1cdfb33198b05ab642a21c77057e759a1d566ccf239bde0538daea1832124d"} Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.574813 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a1cdfb33198b05ab642a21c77057e759a1d566ccf239bde0538daea1832124d" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.574884 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.582867 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.584281 4869 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed" exitCode=0 Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.584417 4869 scope.go:117] "RemoveContainer" containerID="91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.584466 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.596101 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.596390 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.596820 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.596974 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.597120 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.597271 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.597433 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.603949 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.604699 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" pod="openshift-marketplace/certified-operators-2l76t" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.605221 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.605507 4869 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.605843 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.606143 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.606485 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.612248 4869 scope.go:117] "RemoveContainer" containerID="2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.632735 4869 scope.go:117] "RemoveContainer" containerID="6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.646566 4869 scope.go:117] "RemoveContainer" containerID="d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.658678 4869 scope.go:117] "RemoveContainer" containerID="7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.675467 4869 scope.go:117] "RemoveContainer" containerID="1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.695769 4869 scope.go:117] "RemoveContainer" containerID="91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d" Jan 06 14:03:59 crc kubenswrapper[4869]: E0106 14:03:59.698173 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d\": container with ID starting with 91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d 
not found: ID does not exist" containerID="91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.698207 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d"} err="failed to get container status \"91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d\": rpc error: code = NotFound desc = could not find container \"91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d\": container with ID starting with 91d16eed89288e8c6eae9044e50fbc67439c4fa3efb024013f8ea4cee5b4ed5d not found: ID does not exist" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.698233 4869 scope.go:117] "RemoveContainer" containerID="2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7" Jan 06 14:03:59 crc kubenswrapper[4869]: E0106 14:03:59.698963 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\": container with ID starting with 2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7 not found: ID does not exist" containerID="2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.699028 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7"} err="failed to get container status \"2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\": rpc error: code = NotFound desc = could not find container \"2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7\": container with ID starting with 2512b67ee9af29e29b953bbc0c026a39e572643d3f3655d80a399d73e5933fc7 not found: ID does not exist" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.699067 4869 scope.go:117] "RemoveContainer" containerID="6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a" Jan 06 14:03:59 crc kubenswrapper[4869]: E0106 14:03:59.699597 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\": container with ID starting with 6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a not found: ID does not exist" containerID="6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.699630 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a"} err="failed to get container status \"6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\": rpc error: code = NotFound desc = could not find container \"6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a\": container with ID starting with 6eabca604134a03d7228923c32af4481b950ed4768c34c2d548fa11829377e5a not found: ID does not exist" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.699646 4869 scope.go:117] "RemoveContainer" containerID="d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f" Jan 06 14:03:59 crc kubenswrapper[4869]: E0106 14:03:59.700094 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\": container with ID starting with d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f not found: ID does not exist" containerID="d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.700121 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f"} err="failed to get container status \"d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\": rpc error: code = NotFound desc = could not find container \"d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f\": container with ID starting with d6da5d74ae19ac54a22daed7e108c9acf85c7bf51cfd1e90b4a9033866ebea7f not found: ID does not exist" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.700136 4869 scope.go:117] "RemoveContainer" containerID="7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed" Jan 06 14:03:59 crc kubenswrapper[4869]: E0106 14:03:59.700532 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\": container with ID starting with 7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed not found: ID does not exist" containerID="7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.700582 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed"} err="failed to get container status \"7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\": rpc error: code = NotFound desc = could not find container \"7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed\": container with ID starting with 7e1d9b34a9bd6c301a0e25a0108b19179a816276a491195828f0694ac309f7ed not found: ID does not exist" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.700611 4869 scope.go:117] "RemoveContainer" containerID="1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6" Jan 06 14:03:59 crc kubenswrapper[4869]: E0106 14:03:59.703010 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\": container with ID starting with 1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6 not found: ID does not exist" containerID="1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.703039 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6"} err="failed to get container status \"1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\": rpc error: code = NotFound desc = could not find container \"1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6\": container with ID starting with 1d67ac40e9d288306081832f8f7fcfd7597b3894145a2d8796b12267b80495d6 not found: ID does not exist" Jan 06 14:03:59 crc kubenswrapper[4869]: I0106 14:03:59.710475 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" 
path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.125292 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j8wrz" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.125355 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j8wrz" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.131979 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2l76t" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.132043 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2l76t" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.167002 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j8wrz" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.167608 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.168047 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.168291 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.168546 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.168824 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.169093 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.175027 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-2l76t" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.175459 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.175740 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.175972 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.176196 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.176492 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.176821 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:00 crc kubenswrapper[4869]: E0106 14:04:00.634461 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="3.2s" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.922534 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z5xn5" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.922601 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z5xn5" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.986625 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z5xn5" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.987582 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.988503 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.989271 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.989854 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.990397 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:00 crc kubenswrapper[4869]: I0106 14:04:00.990967 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:01 crc kubenswrapper[4869]: I0106 14:04:01.644069 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z5xn5" Jan 06 14:04:01 crc kubenswrapper[4869]: I0106 14:04:01.644724 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:01 crc kubenswrapper[4869]: I0106 14:04:01.645205 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:01 crc kubenswrapper[4869]: I0106 14:04:01.645639 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:01 crc kubenswrapper[4869]: I0106 
14:04:01.646887 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:01 crc kubenswrapper[4869]: I0106 14:04:01.647222 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:01 crc kubenswrapper[4869]: I0106 14:04:01.647583 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:01 crc kubenswrapper[4869]: I0106 14:04:01.707733 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:01 crc kubenswrapper[4869]: I0106 14:04:01.708917 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:01 crc kubenswrapper[4869]: I0106 14:04:01.709370 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:01 crc kubenswrapper[4869]: I0106 14:04:01.709806 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:01 crc kubenswrapper[4869]: I0106 14:04:01.710088 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:01 crc kubenswrapper[4869]: I0106 14:04:01.710394 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:03 crc kubenswrapper[4869]: 
I0106 14:04:03.341095 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ct92x" Jan 06 14:04:03 crc kubenswrapper[4869]: I0106 14:04:03.341432 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ct92x" Jan 06 14:04:03 crc kubenswrapper[4869]: I0106 14:04:03.378522 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ct92x" Jan 06 14:04:03 crc kubenswrapper[4869]: I0106 14:04:03.379005 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:03 crc kubenswrapper[4869]: I0106 14:04:03.379459 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:03 crc kubenswrapper[4869]: I0106 14:04:03.379972 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:03 crc kubenswrapper[4869]: I0106 14:04:03.380274 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:03 crc kubenswrapper[4869]: I0106 14:04:03.380715 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:03 crc kubenswrapper[4869]: I0106 14:04:03.381040 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:03 crc kubenswrapper[4869]: I0106 14:04:03.561682 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" podUID="58ee4883-a1a6-425c-b079-059119125791" containerName="oauth-openshift" containerID="cri-o://e967663143c7b011fb0e68592291a60283998246b654a59fd7ffd81c792f0fef" gracePeriod=15 Jan 06 14:04:03 crc kubenswrapper[4869]: I0106 14:04:03.679567 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ct92x" Jan 06 14:04:03 crc kubenswrapper[4869]: I0106 14:04:03.680257 4869 status_manager.go:851] "Failed 
to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:03 crc kubenswrapper[4869]: I0106 14:04:03.680758 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:03 crc kubenswrapper[4869]: I0106 14:04:03.681275 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:03 crc kubenswrapper[4869]: I0106 14:04:03.681541 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:03 crc kubenswrapper[4869]: I0106 14:04:03.681920 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:03 crc kubenswrapper[4869]: I0106 14:04:03.682223 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:03 crc kubenswrapper[4869]: E0106 14:04:03.835731 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="6.4s" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.471035 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.471953 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.472491 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.473252 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.473529 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.473855 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.474169 4869 status_manager.go:851] "Failed to get status for pod" podUID="58ee4883-a1a6-425c-b079-059119125791" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-qmjgl\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.474418 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.535405 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-service-ca\") pod \"58ee4883-a1a6-425c-b079-059119125791\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.535519 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-router-certs\") pod 
\"58ee4883-a1a6-425c-b079-059119125791\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.535552 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-session\") pod \"58ee4883-a1a6-425c-b079-059119125791\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.535596 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-audit-policies\") pod \"58ee4883-a1a6-425c-b079-059119125791\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.535630 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-ocp-branding-template\") pod \"58ee4883-a1a6-425c-b079-059119125791\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.535709 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-trusted-ca-bundle\") pod \"58ee4883-a1a6-425c-b079-059119125791\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.535736 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-template-provider-selection\") pod \"58ee4883-a1a6-425c-b079-059119125791\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.535765 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/58ee4883-a1a6-425c-b079-059119125791-audit-dir\") pod \"58ee4883-a1a6-425c-b079-059119125791\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.535820 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-serving-cert\") pod \"58ee4883-a1a6-425c-b079-059119125791\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.535851 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-idp-0-file-data\") pod \"58ee4883-a1a6-425c-b079-059119125791\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.535904 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-template-login\") pod \"58ee4883-a1a6-425c-b079-059119125791\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " Jan 06 14:04:04 crc 
kubenswrapper[4869]: I0106 14:04:04.535947 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-template-error\") pod \"58ee4883-a1a6-425c-b079-059119125791\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.535985 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjpd5\" (UniqueName: \"kubernetes.io/projected/58ee4883-a1a6-425c-b079-059119125791-kube-api-access-fjpd5\") pod \"58ee4883-a1a6-425c-b079-059119125791\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.536020 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-cliconfig\") pod \"58ee4883-a1a6-425c-b079-059119125791\" (UID: \"58ee4883-a1a6-425c-b079-059119125791\") " Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.536235 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "58ee4883-a1a6-425c-b079-059119125791" (UID: "58ee4883-a1a6-425c-b079-059119125791"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.536283 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "58ee4883-a1a6-425c-b079-059119125791" (UID: "58ee4883-a1a6-425c-b079-059119125791"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.536434 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.536455 4869 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.536514 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58ee4883-a1a6-425c-b079-059119125791-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "58ee4883-a1a6-425c-b079-059119125791" (UID: "58ee4883-a1a6-425c-b079-059119125791"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.536981 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "58ee4883-a1a6-425c-b079-059119125791" (UID: "58ee4883-a1a6-425c-b079-059119125791"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.537164 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "58ee4883-a1a6-425c-b079-059119125791" (UID: "58ee4883-a1a6-425c-b079-059119125791"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.541333 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "58ee4883-a1a6-425c-b079-059119125791" (UID: "58ee4883-a1a6-425c-b079-059119125791"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.541987 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "58ee4883-a1a6-425c-b079-059119125791" (UID: "58ee4883-a1a6-425c-b079-059119125791"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.542842 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "58ee4883-a1a6-425c-b079-059119125791" (UID: "58ee4883-a1a6-425c-b079-059119125791"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.542929 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58ee4883-a1a6-425c-b079-059119125791-kube-api-access-fjpd5" (OuterVolumeSpecName: "kube-api-access-fjpd5") pod "58ee4883-a1a6-425c-b079-059119125791" (UID: "58ee4883-a1a6-425c-b079-059119125791"). InnerVolumeSpecName "kube-api-access-fjpd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.543000 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "58ee4883-a1a6-425c-b079-059119125791" (UID: "58ee4883-a1a6-425c-b079-059119125791"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.543117 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "58ee4883-a1a6-425c-b079-059119125791" (UID: "58ee4883-a1a6-425c-b079-059119125791"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.543411 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "58ee4883-a1a6-425c-b079-059119125791" (UID: "58ee4883-a1a6-425c-b079-059119125791"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.543622 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "58ee4883-a1a6-425c-b079-059119125791" (UID: "58ee4883-a1a6-425c-b079-059119125791"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.543798 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "58ee4883-a1a6-425c-b079-059119125791" (UID: "58ee4883-a1a6-425c-b079-059119125791"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.618431 4869 generic.go:334] "Generic (PLEG): container finished" podID="58ee4883-a1a6-425c-b079-059119125791" containerID="e967663143c7b011fb0e68592291a60283998246b654a59fd7ffd81c792f0fef" exitCode=0 Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.618500 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.618524 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" event={"ID":"58ee4883-a1a6-425c-b079-059119125791","Type":"ContainerDied","Data":"e967663143c7b011fb0e68592291a60283998246b654a59fd7ffd81c792f0fef"} Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.618576 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" event={"ID":"58ee4883-a1a6-425c-b079-059119125791","Type":"ContainerDied","Data":"fe0f54845059a9f59607629a9b180cb561427bce9e27e5d20e35878fd811d277"} Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.618595 4869 scope.go:117] "RemoveContainer" containerID="e967663143c7b011fb0e68592291a60283998246b654a59fd7ffd81c792f0fef" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.619138 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.619532 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.619980 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.620427 4869 status_manager.go:851] "Failed to get status for pod" podUID="58ee4883-a1a6-425c-b079-059119125791" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-qmjgl\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.620905 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.621224 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.621520 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.631882 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.632430 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.632893 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.633393 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.633877 4869 status_manager.go:851] "Failed to get status for pod" podUID="58ee4883-a1a6-425c-b079-059119125791" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-qmjgl\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.634342 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.634832 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.637330 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.637377 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-template-provider-selection\") on 
node \"crc\" DevicePath \"\"" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.637396 4869 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/58ee4883-a1a6-425c-b079-059119125791-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.637413 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.637430 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.637450 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.637468 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.637486 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjpd5\" (UniqueName: \"kubernetes.io/projected/58ee4883-a1a6-425c-b079-059119125791-kube-api-access-fjpd5\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.637500 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.637514 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.637529 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.637548 4869 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/58ee4883-a1a6-425c-b079-059119125791-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.637970 4869 scope.go:117] "RemoveContainer" containerID="e967663143c7b011fb0e68592291a60283998246b654a59fd7ffd81c792f0fef" Jan 06 14:04:04 crc kubenswrapper[4869]: E0106 14:04:04.638446 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e967663143c7b011fb0e68592291a60283998246b654a59fd7ffd81c792f0fef\": container with ID starting with e967663143c7b011fb0e68592291a60283998246b654a59fd7ffd81c792f0fef not found: ID does not exist" 
containerID="e967663143c7b011fb0e68592291a60283998246b654a59fd7ffd81c792f0fef" Jan 06 14:04:04 crc kubenswrapper[4869]: I0106 14:04:04.638492 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e967663143c7b011fb0e68592291a60283998246b654a59fd7ffd81c792f0fef"} err="failed to get container status \"e967663143c7b011fb0e68592291a60283998246b654a59fd7ffd81c792f0fef\": rpc error: code = NotFound desc = could not find container \"e967663143c7b011fb0e68592291a60283998246b654a59fd7ffd81c792f0fef\": container with ID starting with e967663143c7b011fb0e68592291a60283998246b654a59fd7ffd81c792f0fef not found: ID does not exist" Jan 06 14:04:07 crc kubenswrapper[4869]: E0106 14:04:07.447003 4869 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.230:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-ct92x.188829469a75b2fd openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-ct92x,UID:dff049ab-f2f2-47b0-ad0d-28a5977bd953,APIVersion:v1,ResourceVersion:28747,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-06 14:03:56.958774013 +0000 UTC m=+255.498461677,LastTimestamp:2026-01-06 14:03:56.958774013 +0000 UTC m=+255.498461677,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 06 14:04:08 crc kubenswrapper[4869]: E0106 14:04:08.436282 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:04:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:04:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:04:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:04:08Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4ce15258595e88a4613e55b5002340f1143608530d748c8b36870dd2a4b6ae62\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:701def604428296b90a5337fdea5e4cda84c79e8afe0f67090cdd205e6476aa4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1655667031},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:14b3ee92c08a9bf28d563142e30580743af07af884dfc84ab348a5a7beacffa0\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:6d021c1f0f84f1c5c2f7f66eb7508856040394b5be754fc8c6debc66644368b5\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1231959908},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:19c4662e9863b5adfd8dafb67cee3dc6d84c3e0230f73df2e278f70a40e66ea2\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:301d8a7121856c0a045e2171455b80a60da368013cca7906f1ce3c4de2ca9858\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1203987286},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:6b3b97e17390b5ee568393f2501a5fc412865074b8f6c5355ea48ab7c3983b7a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:8bb7ea6c489e90cb357c7f50fe8266a6a6c6e23e4931a5eaa0fd33a409db20e8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1175127379},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\
\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:08 crc kubenswrapper[4869]: E0106 14:04:08.437404 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:08 crc kubenswrapper[4869]: E0106 14:04:08.437624 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:08 crc kubenswrapper[4869]: E0106 14:04:08.438051 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:08 crc kubenswrapper[4869]: E0106 14:04:08.438536 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:08 crc kubenswrapper[4869]: E0106 14:04:08.438564 4869 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 06 14:04:09 crc kubenswrapper[4869]: I0106 14:04:09.281545 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-szfbw" Jan 06 14:04:09 crc kubenswrapper[4869]: I0106 14:04:09.282575 4869 status_manager.go:851] "Failed to get status for pod" podUID="58ee4883-a1a6-425c-b079-059119125791" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-qmjgl\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:09 crc kubenswrapper[4869]: I0106 14:04:09.283208 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:09 crc kubenswrapper[4869]: I0106 14:04:09.284008 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:09 crc kubenswrapper[4869]: I0106 14:04:09.284776 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:09 crc kubenswrapper[4869]: I0106 14:04:09.285279 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:09 crc kubenswrapper[4869]: I0106 14:04:09.285751 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" 
pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:09 crc kubenswrapper[4869]: I0106 14:04:09.286279 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:09 crc kubenswrapper[4869]: I0106 14:04:09.703990 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:04:09 crc kubenswrapper[4869]: I0106 14:04:09.704584 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:09 crc kubenswrapper[4869]: I0106 14:04:09.705137 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:09 crc kubenswrapper[4869]: I0106 14:04:09.705597 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:09 crc kubenswrapper[4869]: I0106 14:04:09.705994 4869 status_manager.go:851] "Failed to get status for pod" podUID="58ee4883-a1a6-425c-b079-059119125791" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-qmjgl\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:09 crc kubenswrapper[4869]: I0106 14:04:09.706314 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:09 crc kubenswrapper[4869]: I0106 14:04:09.706609 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:09 crc kubenswrapper[4869]: I0106 14:04:09.706936 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 
38.102.83.230:6443: connect: connection refused" Jan 06 14:04:09 crc kubenswrapper[4869]: I0106 14:04:09.719390 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="049f0484-d635-4877-9fdb-16aa6a1970d2" Jan 06 14:04:09 crc kubenswrapper[4869]: I0106 14:04:09.719439 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="049f0484-d635-4877-9fdb-16aa6a1970d2" Jan 06 14:04:09 crc kubenswrapper[4869]: E0106 14:04:09.719965 4869 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:04:09 crc kubenswrapper[4869]: I0106 14:04:09.720439 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.173314 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j8wrz" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.174534 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.174812 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.175041 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.175256 4869 status_manager.go:851] "Failed to get status for pod" podUID="58ee4883-a1a6-425c-b079-059119125791" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-qmjgl\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.175502 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.175877 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: 
connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.176101 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.190380 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2l76t" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.191433 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.191979 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.192230 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.192421 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.192597 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.192827 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.193079 4869 status_manager.go:851] "Failed to get status for pod" podUID="58ee4883-a1a6-425c-b079-059119125791" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-qmjgl\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: E0106 14:04:10.237812 4869 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="7s" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.652637 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.652712 4869 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="0691d53b8f75c65c7afbf74c6a46e97d168af4a60da259d2c20a1c6d1cc380e8" exitCode=1 Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.652771 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"0691d53b8f75c65c7afbf74c6a46e97d168af4a60da259d2c20a1c6d1cc380e8"} Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.653500 4869 scope.go:117] "RemoveContainer" containerID="0691d53b8f75c65c7afbf74c6a46e97d168af4a60da259d2c20a1c6d1cc380e8" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.653630 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.653991 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.654017 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"37fcee9c88087666bf405823e7a0a3026993f8f60408d1fedcee412acbb1a69b"} Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.654007 4869 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="37fcee9c88087666bf405823e7a0a3026993f8f60408d1fedcee412acbb1a69b" exitCode=0 Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.654196 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="049f0484-d635-4877-9fdb-16aa6a1970d2" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.654209 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="049f0484-d635-4877-9fdb-16aa6a1970d2" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.654192 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b4b438e68a545d6e922f299e1be1af24d6c609e9c440d4a4ee0f0ac706b6c95e"} Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.654294 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: E0106 14:04:10.654525 4869 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.654749 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.655259 4869 status_manager.go:851] "Failed to get status for pod" podUID="58ee4883-a1a6-425c-b079-059119125791" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-qmjgl\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.655509 4869 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.655902 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.656257 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.656629 4869 status_manager.go:851] "Failed to get status for pod" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" pod="openshift-marketplace/community-operators-szfbw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-szfbw\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.656960 4869 status_manager.go:851] "Failed to get status for pod" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" pod="openshift-marketplace/redhat-operators-ct92x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-ct92x\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.657336 4869 status_manager.go:851] "Failed to get status for pod" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" pod="openshift-marketplace/certified-operators-j8wrz" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-j8wrz\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.657654 4869 status_manager.go:851] "Failed to get status for pod" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" pod="openshift-marketplace/certified-operators-2l76t" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2l76t\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.658034 4869 status_manager.go:851] "Failed to get status for pod" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" pod="openshift-marketplace/redhat-marketplace-z5xn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-z5xn5\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.658247 4869 status_manager.go:851] "Failed to get status for pod" podUID="58ee4883-a1a6-425c-b079-059119125791" pod="openshift-authentication/oauth-openshift-558db77b4-qmjgl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-qmjgl\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.658423 4869 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:10 crc kubenswrapper[4869]: I0106 14:04:10.658694 4869 status_manager.go:851] "Failed to get status for pod" podUID="bac30697-1479-4a2f-8133-f80a7919f061" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 06 14:04:11 crc kubenswrapper[4869]: I0106 14:04:11.456293 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 14:04:11 crc kubenswrapper[4869]: I0106 14:04:11.684554 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bbd74d74984ca43bc912bd2b784437d7f5ef947ce3b1429b03f72d43bf829df8"} Jan 06 14:04:11 crc kubenswrapper[4869]: I0106 14:04:11.684601 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"837eb42c520767d64ebb4efe44eb6ec316ce7f4609c3f38457d9fa0ded4ae45f"} Jan 06 14:04:11 crc kubenswrapper[4869]: I0106 14:04:11.684611 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0ad4067944a6a26bffa35a5926d06fd5d7940b5b7164f46bd9d170a3fb47e81c"} Jan 06 14:04:11 crc kubenswrapper[4869]: I0106 14:04:11.684620 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"213b2d8a99626d8ab43839f7677ca11e97c74f4cdfe6d2a7b809dd74e67c6e37"} Jan 06 14:04:11 crc kubenswrapper[4869]: I0106 14:04:11.695350 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 06 14:04:11 crc kubenswrapper[4869]: I0106 14:04:11.695428 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9e51d67536462a19489ffde362436cceddb8498c573094ae66f23aaac02e955a"} Jan 06 14:04:12 crc kubenswrapper[4869]: I0106 14:04:12.704867 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"db954627e4cd4884d3d526f611816671964e9e5a196017d8391e2e47afeb5d4e"} Jan 06 14:04:12 crc kubenswrapper[4869]: I0106 14:04:12.705469 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:04:12 crc kubenswrapper[4869]: I0106 14:04:12.705368 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="049f0484-d635-4877-9fdb-16aa6a1970d2" Jan 06 14:04:12 crc kubenswrapper[4869]: I0106 14:04:12.705507 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="049f0484-d635-4877-9fdb-16aa6a1970d2" Jan 06 14:04:13 crc kubenswrapper[4869]: I0106 14:04:13.092692 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 14:04:13 crc kubenswrapper[4869]: I0106 14:04:13.097836 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 14:04:13 crc kubenswrapper[4869]: I0106 14:04:13.711263 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 14:04:14 crc kubenswrapper[4869]: I0106 14:04:14.720655 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:04:14 crc kubenswrapper[4869]: I0106 14:04:14.720745 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:04:14 crc kubenswrapper[4869]: I0106 14:04:14.728105 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:04:17 crc kubenswrapper[4869]: I0106 14:04:17.715166 4869 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:04:17 crc kubenswrapper[4869]: I0106 14:04:17.734362 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="049f0484-d635-4877-9fdb-16aa6a1970d2" Jan 06 14:04:17 crc kubenswrapper[4869]: I0106 14:04:17.734411 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="049f0484-d635-4877-9fdb-16aa6a1970d2" Jan 06 14:04:17 crc kubenswrapper[4869]: I0106 14:04:17.744124 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:04:17 crc kubenswrapper[4869]: I0106 14:04:17.748545 4869 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a1e5151f-f181-42b0-b993-9253c31dc7e8" Jan 06 14:04:18 crc kubenswrapper[4869]: I0106 14:04:18.739094 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="049f0484-d635-4877-9fdb-16aa6a1970d2" Jan 06 14:04:18 crc kubenswrapper[4869]: I0106 14:04:18.739125 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="049f0484-d635-4877-9fdb-16aa6a1970d2" Jan 06 14:04:21 crc kubenswrapper[4869]: I0106 14:04:21.461142 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 14:04:21 crc kubenswrapper[4869]: I0106 14:04:21.731222 4869 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="a1e5151f-f181-42b0-b993-9253c31dc7e8" Jan 06 14:04:26 crc kubenswrapper[4869]: I0106 14:04:26.913608 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 06 14:04:27 crc kubenswrapper[4869]: I0106 14:04:27.681535 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 06 14:04:28 crc kubenswrapper[4869]: I0106 14:04:28.206574 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 06 14:04:28 crc kubenswrapper[4869]: I0106 14:04:28.657891 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 06 14:04:29 crc kubenswrapper[4869]: I0106 14:04:29.233616 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 06 14:04:29 crc kubenswrapper[4869]: I0106 14:04:29.336291 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 06 14:04:29 crc kubenswrapper[4869]: I0106 14:04:29.410740 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 06 14:04:29 crc kubenswrapper[4869]: I0106 14:04:29.471497 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 06 14:04:29 crc kubenswrapper[4869]: I0106 14:04:29.935686 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 06 14:04:30 crc kubenswrapper[4869]: I0106 14:04:30.016093 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 06 14:04:30 crc kubenswrapper[4869]: I0106 14:04:30.052374 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 06 14:04:30 crc kubenswrapper[4869]: I0106 14:04:30.117492 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 06 14:04:30 crc kubenswrapper[4869]: I0106 14:04:30.257085 
4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 06 14:04:30 crc kubenswrapper[4869]: I0106 14:04:30.261017 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 06 14:04:30 crc kubenswrapper[4869]: I0106 14:04:30.358395 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 06 14:04:30 crc kubenswrapper[4869]: I0106 14:04:30.482277 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 06 14:04:30 crc kubenswrapper[4869]: I0106 14:04:30.507587 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 06 14:04:30 crc kubenswrapper[4869]: I0106 14:04:30.512833 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 06 14:04:30 crc kubenswrapper[4869]: I0106 14:04:30.568368 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 06 14:04:30 crc kubenswrapper[4869]: I0106 14:04:30.682896 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 06 14:04:30 crc kubenswrapper[4869]: I0106 14:04:30.759795 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 06 14:04:30 crc kubenswrapper[4869]: I0106 14:04:30.778063 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 06 14:04:30 crc kubenswrapper[4869]: I0106 14:04:30.781787 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 06 14:04:30 crc kubenswrapper[4869]: I0106 14:04:30.830590 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 06 14:04:30 crc kubenswrapper[4869]: I0106 14:04:30.863222 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 06 14:04:30 crc kubenswrapper[4869]: I0106 14:04:30.906361 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 06 14:04:31 crc kubenswrapper[4869]: I0106 14:04:31.132228 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 06 14:04:31 crc kubenswrapper[4869]: I0106 14:04:31.279264 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 06 14:04:31 crc kubenswrapper[4869]: I0106 14:04:31.400912 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 06 14:04:31 crc kubenswrapper[4869]: I0106 14:04:31.407123 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 06 14:04:31 crc kubenswrapper[4869]: I0106 14:04:31.539807 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 06 14:04:31 crc kubenswrapper[4869]: 
I0106 14:04:31.547491 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 06 14:04:31 crc kubenswrapper[4869]: I0106 14:04:31.614124 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 06 14:04:31 crc kubenswrapper[4869]: I0106 14:04:31.672448 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 06 14:04:31 crc kubenswrapper[4869]: I0106 14:04:31.682403 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 06 14:04:31 crc kubenswrapper[4869]: I0106 14:04:31.905117 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 06 14:04:31 crc kubenswrapper[4869]: I0106 14:04:31.974732 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 06 14:04:32 crc kubenswrapper[4869]: I0106 14:04:32.000181 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 06 14:04:32 crc kubenswrapper[4869]: I0106 14:04:32.011980 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 06 14:04:32 crc kubenswrapper[4869]: I0106 14:04:32.044215 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 06 14:04:32 crc kubenswrapper[4869]: I0106 14:04:32.056033 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 06 14:04:32 crc kubenswrapper[4869]: I0106 14:04:32.078855 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 06 14:04:32 crc kubenswrapper[4869]: I0106 14:04:32.112972 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 06 14:04:32 crc kubenswrapper[4869]: I0106 14:04:32.135451 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 06 14:04:32 crc kubenswrapper[4869]: I0106 14:04:32.216726 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 06 14:04:32 crc kubenswrapper[4869]: I0106 14:04:32.305566 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 06 14:04:32 crc kubenswrapper[4869]: I0106 14:04:32.372194 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 06 14:04:32 crc kubenswrapper[4869]: I0106 14:04:32.389462 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 06 14:04:32 crc kubenswrapper[4869]: I0106 14:04:32.443261 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 06 14:04:32 crc kubenswrapper[4869]: I0106 14:04:32.458472 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 06 14:04:32 crc kubenswrapper[4869]: I0106 14:04:32.553809 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 06 14:04:32 crc kubenswrapper[4869]: I0106 14:04:32.625393 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 06 14:04:32 crc kubenswrapper[4869]: I0106 14:04:32.786502 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 06 14:04:32 crc kubenswrapper[4869]: I0106 14:04:32.806185 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 06 14:04:32 crc kubenswrapper[4869]: I0106 14:04:32.841592 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 06 14:04:32 crc kubenswrapper[4869]: I0106 14:04:32.939140 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 06 14:04:32 crc kubenswrapper[4869]: I0106 14:04:32.985565 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 06 14:04:33 crc kubenswrapper[4869]: I0106 14:04:33.110502 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 06 14:04:33 crc kubenswrapper[4869]: I0106 14:04:33.146416 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 06 14:04:33 crc kubenswrapper[4869]: I0106 14:04:33.174829 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 06 14:04:33 crc kubenswrapper[4869]: I0106 14:04:33.199349 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 06 14:04:33 crc kubenswrapper[4869]: I0106 14:04:33.292494 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 06 14:04:33 crc kubenswrapper[4869]: I0106 14:04:33.302697 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 06 14:04:33 crc kubenswrapper[4869]: I0106 14:04:33.403721 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 06 14:04:33 crc kubenswrapper[4869]: I0106 14:04:33.408833 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 06 14:04:33 crc kubenswrapper[4869]: I0106 14:04:33.416512 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 06 14:04:33 crc kubenswrapper[4869]: I0106 14:04:33.482784 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 06 14:04:33 crc kubenswrapper[4869]: I0106 14:04:33.568703 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 06 14:04:33 crc kubenswrapper[4869]: I0106 14:04:33.641814 4869 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"metrics-tls" Jan 06 14:04:33 crc kubenswrapper[4869]: I0106 14:04:33.719369 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 06 14:04:33 crc kubenswrapper[4869]: I0106 14:04:33.795605 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 06 14:04:33 crc kubenswrapper[4869]: I0106 14:04:33.850199 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 06 14:04:33 crc kubenswrapper[4869]: I0106 14:04:33.917824 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 06 14:04:33 crc kubenswrapper[4869]: I0106 14:04:33.947626 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 06 14:04:34 crc kubenswrapper[4869]: I0106 14:04:34.041829 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 06 14:04:34 crc kubenswrapper[4869]: I0106 14:04:34.087227 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 06 14:04:34 crc kubenswrapper[4869]: I0106 14:04:34.155282 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 06 14:04:34 crc kubenswrapper[4869]: I0106 14:04:34.210867 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 06 14:04:34 crc kubenswrapper[4869]: I0106 14:04:34.258850 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 06 14:04:34 crc kubenswrapper[4869]: I0106 14:04:34.273580 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 06 14:04:34 crc kubenswrapper[4869]: I0106 14:04:34.322636 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 06 14:04:34 crc kubenswrapper[4869]: I0106 14:04:34.339434 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 06 14:04:34 crc kubenswrapper[4869]: I0106 14:04:34.357616 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 06 14:04:34 crc kubenswrapper[4869]: I0106 14:04:34.497389 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 06 14:04:34 crc kubenswrapper[4869]: I0106 14:04:34.581149 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 06 14:04:34 crc kubenswrapper[4869]: I0106 14:04:34.681273 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 06 14:04:34 crc kubenswrapper[4869]: I0106 14:04:34.697750 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 06 14:04:34 crc kubenswrapper[4869]: I0106 14:04:34.826397 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"trusted-ca" Jan 06 14:04:34 crc kubenswrapper[4869]: I0106 14:04:34.881150 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 06 14:04:34 crc kubenswrapper[4869]: I0106 14:04:34.998776 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 06 14:04:35 crc kubenswrapper[4869]: I0106 14:04:35.087542 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 06 14:04:35 crc kubenswrapper[4869]: I0106 14:04:35.147767 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 06 14:04:35 crc kubenswrapper[4869]: I0106 14:04:35.249647 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 06 14:04:35 crc kubenswrapper[4869]: I0106 14:04:35.286880 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 06 14:04:35 crc kubenswrapper[4869]: I0106 14:04:35.352289 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 06 14:04:35 crc kubenswrapper[4869]: I0106 14:04:35.557526 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 06 14:04:35 crc kubenswrapper[4869]: I0106 14:04:35.578789 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 06 14:04:35 crc kubenswrapper[4869]: I0106 14:04:35.734351 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 06 14:04:35 crc kubenswrapper[4869]: I0106 14:04:35.778221 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 06 14:04:35 crc kubenswrapper[4869]: I0106 14:04:35.814855 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 06 14:04:35 crc kubenswrapper[4869]: I0106 14:04:35.833394 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 06 14:04:35 crc kubenswrapper[4869]: I0106 14:04:35.878992 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 06 14:04:35 crc kubenswrapper[4869]: I0106 14:04:35.914484 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.004778 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.006492 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.006639 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.034463 4869 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.078837 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.132197 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.254219 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.366021 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.380619 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.537253 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.572876 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.645957 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.687267 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.705826 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.730596 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.743002 4869 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.788623 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.851048 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.857045 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.861990 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.868069 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 06 14:04:36 crc kubenswrapper[4869]: I0106 14:04:36.878240 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 06 14:04:37 crc kubenswrapper[4869]: I0106 14:04:37.095788 4869 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 06 14:04:37 crc kubenswrapper[4869]: I0106 14:04:37.141465 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 06 14:04:37 crc kubenswrapper[4869]: I0106 14:04:37.156983 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 06 14:04:37 crc kubenswrapper[4869]: I0106 14:04:37.165609 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 06 14:04:37 crc kubenswrapper[4869]: I0106 14:04:37.311104 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 06 14:04:37 crc kubenswrapper[4869]: I0106 14:04:37.400472 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 06 14:04:37 crc kubenswrapper[4869]: I0106 14:04:37.411372 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 06 14:04:37 crc kubenswrapper[4869]: I0106 14:04:37.474165 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 06 14:04:37 crc kubenswrapper[4869]: I0106 14:04:37.477942 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 06 14:04:37 crc kubenswrapper[4869]: I0106 14:04:37.513543 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 06 14:04:37 crc kubenswrapper[4869]: I0106 14:04:37.516342 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 06 14:04:37 crc kubenswrapper[4869]: I0106 14:04:37.541088 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 06 14:04:37 crc kubenswrapper[4869]: I0106 14:04:37.747262 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 06 14:04:37 crc kubenswrapper[4869]: I0106 14:04:37.908521 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 06 14:04:37 crc kubenswrapper[4869]: I0106 14:04:37.964820 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.103916 4869 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.106029 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z5xn5" podStartSLOduration=45.600288087 podStartE2EDuration="2m18.106005139s" podCreationTimestamp="2026-01-06 14:02:20 +0000 UTC" firstStartedPulling="2026-01-06 14:02:22.544114919 +0000 UTC m=+161.083802583" lastFinishedPulling="2026-01-06 14:03:55.049831971 +0000 UTC m=+253.589519635" observedRunningTime="2026-01-06 14:04:17.472384288 +0000 UTC m=+276.012071952" watchObservedRunningTime="2026-01-06 14:04:38.106005139 +0000 UTC m=+296.645692823" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.106338 4869 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2l76t" podStartSLOduration=45.502061794 podStartE2EDuration="2m19.106331267s" podCreationTimestamp="2026-01-06 14:02:19 +0000 UTC" firstStartedPulling="2026-01-06 14:02:22.607847842 +0000 UTC m=+161.147535506" lastFinishedPulling="2026-01-06 14:03:56.212117315 +0000 UTC m=+254.751804979" observedRunningTime="2026-01-06 14:04:17.519453595 +0000 UTC m=+276.059141279" watchObservedRunningTime="2026-01-06 14:04:38.106331267 +0000 UTC m=+296.646018931" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.107816 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-szfbw" podStartSLOduration=45.966806486 podStartE2EDuration="2m20.107773595s" podCreationTimestamp="2026-01-06 14:02:18 +0000 UTC" firstStartedPulling="2026-01-06 14:02:22.601559813 +0000 UTC m=+161.141247477" lastFinishedPulling="2026-01-06 14:03:56.742526922 +0000 UTC m=+255.282214586" observedRunningTime="2026-01-06 14:04:17.620828495 +0000 UTC m=+276.160516149" watchObservedRunningTime="2026-01-06 14:04:38.107773595 +0000 UTC m=+296.647461289" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.111833 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ct92x" podStartSLOduration=45.142543734 podStartE2EDuration="2m16.108365621s" podCreationTimestamp="2026-01-06 14:02:22 +0000 UTC" firstStartedPulling="2026-01-06 14:02:25.75634072 +0000 UTC m=+164.296028384" lastFinishedPulling="2026-01-06 14:03:56.722162607 +0000 UTC m=+255.261850271" observedRunningTime="2026-01-06 14:04:17.637258599 +0000 UTC m=+276.176946283" watchObservedRunningTime="2026-01-06 14:04:38.108365621 +0000 UTC m=+296.648053335" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.115934 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j8wrz" podStartSLOduration=44.71390514 podStartE2EDuration="2m20.115907025s" podCreationTimestamp="2026-01-06 14:02:18 +0000 UTC" firstStartedPulling="2026-01-06 14:02:22.601953595 +0000 UTC m=+161.141641259" lastFinishedPulling="2026-01-06 14:03:58.00395548 +0000 UTC m=+256.543643144" observedRunningTime="2026-01-06 14:04:17.492899139 +0000 UTC m=+276.032586813" watchObservedRunningTime="2026-01-06 14:04:38.115907025 +0000 UTC m=+296.655594689" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.117017 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-qmjgl"] Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.117088 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-7687c8778f-hmsll"] Jan 06 14:04:38 crc kubenswrapper[4869]: E0106 14:04:38.117416 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac30697-1479-4a2f-8133-f80a7919f061" containerName="installer" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.117437 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac30697-1479-4a2f-8133-f80a7919f061" containerName="installer" Jan 06 14:04:38 crc kubenswrapper[4869]: E0106 14:04:38.117460 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ee4883-a1a6-425c-b079-059119125791" containerName="oauth-openshift" Jan 06 14:04:38 crc kubenswrapper[4869]: 
I0106 14:04:38.117469 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ee4883-a1a6-425c-b079-059119125791" containerName="oauth-openshift" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.117657 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="bac30697-1479-4a2f-8133-f80a7919f061" containerName="installer" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.117843 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ee4883-a1a6-425c-b079-059119125791" containerName="oauth-openshift" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.118233 4869 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="049f0484-d635-4877-9fdb-16aa6a1970d2" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.118272 4869 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="049f0484-d635-4877-9fdb-16aa6a1970d2" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.118554 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.122075 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.124734 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.124996 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.125103 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.125300 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.126173 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.126228 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.126683 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.126951 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.127039 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.127710 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.127766 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.130789 4869 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.135789 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.136003 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.140957 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.144965 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.166921 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.166899774 podStartE2EDuration="21.166899774s" podCreationTimestamp="2026-01-06 14:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:04:38.162197861 +0000 UTC m=+296.701885525" watchObservedRunningTime="2026-01-06 14:04:38.166899774 +0000 UTC m=+296.706587438" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.182767 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.193087 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.237841 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-system-session\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.238211 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.238329 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptw2w\" (UniqueName: \"kubernetes.io/projected/85b5ab25-b979-4446-9070-28c5ee663955-kube-api-access-ptw2w\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.238445 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/85b5ab25-b979-4446-9070-28c5ee663955-audit-dir\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " 
pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.238574 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-user-template-error\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.238785 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/85b5ab25-b979-4446-9070-28c5ee663955-audit-policies\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.238906 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.239014 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.239129 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.239273 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-system-router-certs\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.239390 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-user-template-login\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.239508 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-system-service-ca\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.239614 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.239756 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.241832 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.303875 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.340571 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-system-session\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.340637 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.340660 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptw2w\" (UniqueName: \"kubernetes.io/projected/85b5ab25-b979-4446-9070-28c5ee663955-kube-api-access-ptw2w\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.340694 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/85b5ab25-b979-4446-9070-28c5ee663955-audit-dir\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.340727 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-user-template-error\") pod 
\"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.340762 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/85b5ab25-b979-4446-9070-28c5ee663955-audit-policies\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.340787 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.340818 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.340844 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.340866 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-system-router-certs\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.340885 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-user-template-login\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.340906 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-system-service-ca\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.340923 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.340939 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.341566 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.341898 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/85b5ab25-b979-4446-9070-28c5ee663955-audit-dir\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.342540 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-system-service-ca\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.342627 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/85b5ab25-b979-4446-9070-28c5ee663955-audit-policies\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.343538 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.347521 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.348708 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-system-session\") pod 
\"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.348935 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.349595 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.355113 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-user-template-error\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.355441 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-system-router-certs\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.357932 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.361470 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/85b5ab25-b979-4446-9070-28c5ee663955-v4-0-config-user-template-login\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.366599 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.367324 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptw2w\" (UniqueName: \"kubernetes.io/projected/85b5ab25-b979-4446-9070-28c5ee663955-kube-api-access-ptw2w\") pod \"oauth-openshift-7687c8778f-hmsll\" (UID: \"85b5ab25-b979-4446-9070-28c5ee663955\") " pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.450439 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.500175 4869 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.517418 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.570146 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.580364 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.581013 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.665824 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.796943 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.855312 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.896539 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.955567 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 06 14:04:38 crc kubenswrapper[4869]: I0106 14:04:38.962361 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.064626 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.115982 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.205840 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.288139 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.664879 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.665300 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.667901 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.672237 4869 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.672902 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.673293 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.675242 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.675322 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.675438 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.685451 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.717223 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58ee4883-a1a6-425c-b079-059119125791" path="/var/lib/kubelet/pods/58ee4883-a1a6-425c-b079-059119125791/volumes" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.739706 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.768102 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.792973 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.908151 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.914904 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.925266 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 06 14:04:39 crc kubenswrapper[4869]: I0106 14:04:39.975597 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.035194 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.040484 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.046054 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.051870 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 06 14:04:40 crc 
kubenswrapper[4869]: I0106 14:04:40.115021 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7687c8778f-hmsll"] Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.154772 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.189986 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.190298 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.222337 4869 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.222623 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://c8048e3d4420c692595a0aeee5415bac7e67d2ff337866961b3bb3dfc65eb6e9" gracePeriod=5 Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.243603 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.288335 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.288375 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.288768 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.289036 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.379105 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.410352 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.417189 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7687c8778f-hmsll"] Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.432488 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.443693 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.498473 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.525262 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.656213 4869 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.718978 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.763867 4869 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.839147 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.894041 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.908508 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" event={"ID":"85b5ab25-b979-4446-9070-28c5ee663955","Type":"ContainerStarted","Data":"7e931a7a78ccfdebec280335a2b5de0ed128a2c2d1bfb55c3ec118b3b5583db0"} Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.908559 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" event={"ID":"85b5ab25-b979-4446-9070-28c5ee663955","Type":"ContainerStarted","Data":"9ecab2b1da46824f16a0eacb8c4f156c0d52b9f63f9248eb8e96faaf8d6a98d4"} Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.912326 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.938323 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" podStartSLOduration=62.938302661 podStartE2EDuration="1m2.938302661s" podCreationTimestamp="2026-01-06 14:03:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:04:40.933201569 +0000 UTC m=+299.472889233" watchObservedRunningTime="2026-01-06 14:04:40.938302661 +0000 UTC m=+299.477990325" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.973096 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 06 14:04:40 crc kubenswrapper[4869]: I0106 14:04:40.988502 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.051164 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.066899 4869 patch_prober.go:28] interesting pod/oauth-openshift-7687c8778f-hmsll container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": read tcp 10.217.0.2:43626->10.217.0.56:6443: read: connection reset by peer" start-of-body= Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.066989 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" podUID="85b5ab25-b979-4446-9070-28c5ee663955" containerName="oauth-openshift" probeResult="failure" output="Get 
\"https://10.217.0.56:6443/healthz\": read tcp 10.217.0.2:43626->10.217.0.56:6443: read: connection reset by peer" Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.082775 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.223649 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.225252 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.253746 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.316750 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.337908 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.391112 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.555977 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.573249 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.581621 4869 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.595943 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.810433 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.906441 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.937918 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.946310 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7687c8778f-hmsll_85b5ab25-b979-4446-9070-28c5ee663955/oauth-openshift/0.log" Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.946493 4869 generic.go:334] "Generic (PLEG): container finished" podID="85b5ab25-b979-4446-9070-28c5ee663955" containerID="7e931a7a78ccfdebec280335a2b5de0ed128a2c2d1bfb55c3ec118b3b5583db0" exitCode=255 Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.946574 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" 
event={"ID":"85b5ab25-b979-4446-9070-28c5ee663955","Type":"ContainerDied","Data":"7e931a7a78ccfdebec280335a2b5de0ed128a2c2d1bfb55c3ec118b3b5583db0"} Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.947597 4869 scope.go:117] "RemoveContainer" containerID="7e931a7a78ccfdebec280335a2b5de0ed128a2c2d1bfb55c3ec118b3b5583db0" Jan 06 14:04:41 crc kubenswrapper[4869]: I0106 14:04:41.983358 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 06 14:04:42 crc kubenswrapper[4869]: I0106 14:04:42.081021 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 06 14:04:42 crc kubenswrapper[4869]: I0106 14:04:42.181997 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 06 14:04:42 crc kubenswrapper[4869]: I0106 14:04:42.254859 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 06 14:04:42 crc kubenswrapper[4869]: I0106 14:04:42.574455 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 06 14:04:42 crc kubenswrapper[4869]: I0106 14:04:42.911062 4869 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 06 14:04:42 crc kubenswrapper[4869]: I0106 14:04:42.948207 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 06 14:04:42 crc kubenswrapper[4869]: I0106 14:04:42.955948 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7687c8778f-hmsll_85b5ab25-b979-4446-9070-28c5ee663955/oauth-openshift/0.log" Jan 06 14:04:42 crc kubenswrapper[4869]: I0106 14:04:42.956040 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" event={"ID":"85b5ab25-b979-4446-9070-28c5ee663955","Type":"ContainerStarted","Data":"c8c04d8906acefb65133991b7cd2e45c2527a719e25fe9cab86dc9e82a4c0aef"} Jan 06 14:04:42 crc kubenswrapper[4869]: I0106 14:04:42.956736 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:42 crc kubenswrapper[4869]: I0106 14:04:42.963059 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7687c8778f-hmsll" Jan 06 14:04:43 crc kubenswrapper[4869]: I0106 14:04:43.025831 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 06 14:04:43 crc kubenswrapper[4869]: I0106 14:04:43.063494 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 06 14:04:43 crc kubenswrapper[4869]: I0106 14:04:43.203477 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 06 14:04:43 crc kubenswrapper[4869]: I0106 14:04:43.330408 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 06 14:04:43 crc kubenswrapper[4869]: I0106 14:04:43.499846 4869 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 06 14:04:43 crc kubenswrapper[4869]: I0106 14:04:43.538017 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 06 14:04:43 crc kubenswrapper[4869]: I0106 14:04:43.567905 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 06 14:04:43 crc kubenswrapper[4869]: I0106 14:04:43.722938 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 06 14:04:44 crc kubenswrapper[4869]: I0106 14:04:44.009069 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 06 14:04:44 crc kubenswrapper[4869]: I0106 14:04:44.124101 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 06 14:04:44 crc kubenswrapper[4869]: I0106 14:04:44.367074 4869 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 06 14:04:44 crc kubenswrapper[4869]: I0106 14:04:44.882244 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 06 14:04:45 crc kubenswrapper[4869]: I0106 14:04:45.811783 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 06 14:04:45 crc kubenswrapper[4869]: I0106 14:04:45.812276 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 06 14:04:45 crc kubenswrapper[4869]: I0106 14:04:45.972739 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 06 14:04:45 crc kubenswrapper[4869]: I0106 14:04:45.972794 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 06 14:04:45 crc kubenswrapper[4869]: I0106 14:04:45.972824 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:04:45 crc kubenswrapper[4869]: I0106 14:04:45.972890 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 06 14:04:45 crc kubenswrapper[4869]: I0106 14:04:45.972913 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:04:45 crc kubenswrapper[4869]: I0106 14:04:45.972949 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 06 14:04:45 crc kubenswrapper[4869]: I0106 14:04:45.972981 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 06 14:04:45 crc kubenswrapper[4869]: I0106 14:04:45.973077 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:04:45 crc kubenswrapper[4869]: I0106 14:04:45.973117 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:04:45 crc kubenswrapper[4869]: I0106 14:04:45.973209 4869 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:45 crc kubenswrapper[4869]: I0106 14:04:45.973226 4869 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:45 crc kubenswrapper[4869]: I0106 14:04:45.973239 4869 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:45 crc kubenswrapper[4869]: I0106 14:04:45.973250 4869 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:45 crc kubenswrapper[4869]: I0106 14:04:45.977120 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 06 14:04:45 crc kubenswrapper[4869]: I0106 14:04:45.977275 4869 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="c8048e3d4420c692595a0aeee5415bac7e67d2ff337866961b3bb3dfc65eb6e9" exitCode=137 Jan 06 14:04:45 crc kubenswrapper[4869]: I0106 14:04:45.977349 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 06 14:04:45 crc kubenswrapper[4869]: I0106 14:04:45.977355 4869 scope.go:117] "RemoveContainer" containerID="c8048e3d4420c692595a0aeee5415bac7e67d2ff337866961b3bb3dfc65eb6e9" Jan 06 14:04:45 crc kubenswrapper[4869]: I0106 14:04:45.991900 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:04:46 crc kubenswrapper[4869]: I0106 14:04:46.020655 4869 scope.go:117] "RemoveContainer" containerID="c8048e3d4420c692595a0aeee5415bac7e67d2ff337866961b3bb3dfc65eb6e9" Jan 06 14:04:46 crc kubenswrapper[4869]: E0106 14:04:46.021360 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8048e3d4420c692595a0aeee5415bac7e67d2ff337866961b3bb3dfc65eb6e9\": container with ID starting with c8048e3d4420c692595a0aeee5415bac7e67d2ff337866961b3bb3dfc65eb6e9 not found: ID does not exist" containerID="c8048e3d4420c692595a0aeee5415bac7e67d2ff337866961b3bb3dfc65eb6e9" Jan 06 14:04:46 crc kubenswrapper[4869]: I0106 14:04:46.021438 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8048e3d4420c692595a0aeee5415bac7e67d2ff337866961b3bb3dfc65eb6e9"} err="failed to get container status \"c8048e3d4420c692595a0aeee5415bac7e67d2ff337866961b3bb3dfc65eb6e9\": rpc error: code = NotFound desc = could not find container \"c8048e3d4420c692595a0aeee5415bac7e67d2ff337866961b3bb3dfc65eb6e9\": container with ID starting with c8048e3d4420c692595a0aeee5415bac7e67d2ff337866961b3bb3dfc65eb6e9 not found: ID does not exist" Jan 06 14:04:46 crc kubenswrapper[4869]: I0106 14:04:46.074062 4869 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:47 crc kubenswrapper[4869]: I0106 14:04:47.732551 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.296772 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2l76t"] Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.298495 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2l76t" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" containerName="registry-server" containerID="cri-o://2f922aa859804868bd11abfd3def7d30c06e20401ca97ecf109a69f693814cea" gracePeriod=30 Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.332151 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j8wrz"] Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.332888 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j8wrz" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" containerName="registry-server" containerID="cri-o://c9613543e86c1c89588bea85cc257021eab54e7e5b5a3c709e883b680cdcef28" gracePeriod=30 Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.374609 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-szfbw"] Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.375062 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-szfbw" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" containerName="registry-server" containerID="cri-o://c7acd53e9d750773b403e1e8301f089a8429857a2c4a69c4d3added46c6d5dfe" gracePeriod=30 Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.382994 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-h6xlw"] Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.383254 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" podUID="e0f471c5-8336-42d0-84ff-6e85011cea0a" containerName="marketplace-operator" containerID="cri-o://9612009ac8530ea582ca9abe55fd4aeb44ac20ad73a4f2e4ea5373ae3973fec1" gracePeriod=30 Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.387799 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5xn5"] Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.388227 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z5xn5" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" containerName="registry-server" containerID="cri-o://2533aa5f3f57120e64fc88a1065174886eb09d3a111167a642eef9caf4e9349b" gracePeriod=30 Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.390961 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cbszs"] Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.391187 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cbszs" podUID="c0f4d25c-95bf-4bcd-b4a7-eb8344871cce" containerName="registry-server" containerID="cri-o://f83052014b03c289694025817f4d8b03b70b9417f78fc627f51773a2db8e71b0" gracePeriod=30 Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.399318 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ct92x"] Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.399600 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ct92x" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" containerName="registry-server" containerID="cri-o://6d4337bf98f368463e127e06184adf979f52adfbec52c1ce66bd0e12fee3fac9" gracePeriod=30 Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.410497 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cxrrl"] Jan 06 14:04:52 crc kubenswrapper[4869]: E0106 14:04:52.410920 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.410949 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.411093 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.411806 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cxrrl" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.418106 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cxrrl"] Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.577028 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/604f390e-7e5d-4ec9-8d4f-ce230272186c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cxrrl\" (UID: \"604f390e-7e5d-4ec9-8d4f-ce230272186c\") " pod="openshift-marketplace/marketplace-operator-79b997595-cxrrl" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.577114 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7jfr\" (UniqueName: \"kubernetes.io/projected/604f390e-7e5d-4ec9-8d4f-ce230272186c-kube-api-access-p7jfr\") pod \"marketplace-operator-79b997595-cxrrl\" (UID: \"604f390e-7e5d-4ec9-8d4f-ce230272186c\") " pod="openshift-marketplace/marketplace-operator-79b997595-cxrrl" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.577171 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/604f390e-7e5d-4ec9-8d4f-ce230272186c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cxrrl\" (UID: \"604f390e-7e5d-4ec9-8d4f-ce230272186c\") " pod="openshift-marketplace/marketplace-operator-79b997595-cxrrl" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.678349 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/604f390e-7e5d-4ec9-8d4f-ce230272186c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cxrrl\" (UID: \"604f390e-7e5d-4ec9-8d4f-ce230272186c\") " pod="openshift-marketplace/marketplace-operator-79b997595-cxrrl" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.678863 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/604f390e-7e5d-4ec9-8d4f-ce230272186c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cxrrl\" (UID: \"604f390e-7e5d-4ec9-8d4f-ce230272186c\") " pod="openshift-marketplace/marketplace-operator-79b997595-cxrrl" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.678916 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7jfr\" (UniqueName: \"kubernetes.io/projected/604f390e-7e5d-4ec9-8d4f-ce230272186c-kube-api-access-p7jfr\") pod \"marketplace-operator-79b997595-cxrrl\" (UID: \"604f390e-7e5d-4ec9-8d4f-ce230272186c\") " pod="openshift-marketplace/marketplace-operator-79b997595-cxrrl" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.680898 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/604f390e-7e5d-4ec9-8d4f-ce230272186c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cxrrl\" (UID: \"604f390e-7e5d-4ec9-8d4f-ce230272186c\") " pod="openshift-marketplace/marketplace-operator-79b997595-cxrrl" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.687154 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/604f390e-7e5d-4ec9-8d4f-ce230272186c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cxrrl\" (UID: \"604f390e-7e5d-4ec9-8d4f-ce230272186c\") " pod="openshift-marketplace/marketplace-operator-79b997595-cxrrl" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.696515 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7jfr\" (UniqueName: \"kubernetes.io/projected/604f390e-7e5d-4ec9-8d4f-ce230272186c-kube-api-access-p7jfr\") pod \"marketplace-operator-79b997595-cxrrl\" (UID: \"604f390e-7e5d-4ec9-8d4f-ce230272186c\") " pod="openshift-marketplace/marketplace-operator-79b997595-cxrrl" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.876355 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cxrrl" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.884054 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2l76t" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.888222 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8wrz" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.894931 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbszs" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.911069 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ct92x" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.914112 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-szfbw" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.921251 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5xn5" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.923638 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.984159 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137-catalog-content\") pod \"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137\" (UID: \"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137\") " Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.984220 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3073b84-85aa-4f76-9ade-5e52abfc7cf7-utilities\") pod \"a3073b84-85aa-4f76-9ade-5e52abfc7cf7\" (UID: \"a3073b84-85aa-4f76-9ade-5e52abfc7cf7\") " Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.984242 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3073b84-85aa-4f76-9ade-5e52abfc7cf7-catalog-content\") pod \"a3073b84-85aa-4f76-9ade-5e52abfc7cf7\" (UID: \"a3073b84-85aa-4f76-9ade-5e52abfc7cf7\") " Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.984294 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137-utilities\") pod \"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137\" (UID: \"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137\") " Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.984339 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc2kp\" (UniqueName: \"kubernetes.io/projected/a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137-kube-api-access-dc2kp\") pod \"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137\" (UID: \"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137\") " Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.984362 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r7z9\" (UniqueName: \"kubernetes.io/projected/a3073b84-85aa-4f76-9ade-5e52abfc7cf7-kube-api-access-2r7z9\") pod \"a3073b84-85aa-4f76-9ade-5e52abfc7cf7\" (UID: \"a3073b84-85aa-4f76-9ade-5e52abfc7cf7\") " Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.993588 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3073b84-85aa-4f76-9ade-5e52abfc7cf7-utilities" (OuterVolumeSpecName: "utilities") pod "a3073b84-85aa-4f76-9ade-5e52abfc7cf7" (UID: "a3073b84-85aa-4f76-9ade-5e52abfc7cf7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.996974 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137-kube-api-access-dc2kp" (OuterVolumeSpecName: "kube-api-access-dc2kp") pod "a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" (UID: "a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137"). InnerVolumeSpecName "kube-api-access-dc2kp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:04:52 crc kubenswrapper[4869]: I0106 14:04:52.998841 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3073b84-85aa-4f76-9ade-5e52abfc7cf7-kube-api-access-2r7z9" (OuterVolumeSpecName: "kube-api-access-2r7z9") pod "a3073b84-85aa-4f76-9ade-5e52abfc7cf7" (UID: "a3073b84-85aa-4f76-9ade-5e52abfc7cf7"). InnerVolumeSpecName "kube-api-access-2r7z9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.007103 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137-utilities" (OuterVolumeSpecName: "utilities") pod "a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" (UID: "a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.040604 4869 generic.go:334] "Generic (PLEG): container finished" podID="c590ed4f-a46e-4826-beac-2d353aab75e1" containerID="2533aa5f3f57120e64fc88a1065174886eb09d3a111167a642eef9caf4e9349b" exitCode=0 Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.040748 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5xn5" event={"ID":"c590ed4f-a46e-4826-beac-2d353aab75e1","Type":"ContainerDied","Data":"2533aa5f3f57120e64fc88a1065174886eb09d3a111167a642eef9caf4e9349b"} Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.040799 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5xn5" event={"ID":"c590ed4f-a46e-4826-beac-2d353aab75e1","Type":"ContainerDied","Data":"c9262523dc8d6508b40ba9b8c2d5ea678a05fe21cc800ed26e1d18efd9e4a67a"} Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.040830 4869 scope.go:117] "RemoveContainer" containerID="2533aa5f3f57120e64fc88a1065174886eb09d3a111167a642eef9caf4e9349b" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.041043 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5xn5" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.049035 4869 generic.go:334] "Generic (PLEG): container finished" podID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" containerID="2f922aa859804868bd11abfd3def7d30c06e20401ca97ecf109a69f693814cea" exitCode=0 Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.049157 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2l76t" event={"ID":"a3073b84-85aa-4f76-9ade-5e52abfc7cf7","Type":"ContainerDied","Data":"2f922aa859804868bd11abfd3def7d30c06e20401ca97ecf109a69f693814cea"} Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.049223 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2l76t" event={"ID":"a3073b84-85aa-4f76-9ade-5e52abfc7cf7","Type":"ContainerDied","Data":"76803b1d55e22dbcb4b217c5a76ca6878852df375cca5023aee66959bf08c0ea"} Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.049349 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2l76t" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.056049 4869 generic.go:334] "Generic (PLEG): container finished" podID="e0f471c5-8336-42d0-84ff-6e85011cea0a" containerID="9612009ac8530ea582ca9abe55fd4aeb44ac20ad73a4f2e4ea5373ae3973fec1" exitCode=0 Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.056191 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.056491 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" event={"ID":"e0f471c5-8336-42d0-84ff-6e85011cea0a","Type":"ContainerDied","Data":"9612009ac8530ea582ca9abe55fd4aeb44ac20ad73a4f2e4ea5373ae3973fec1"} Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.056535 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h6xlw" event={"ID":"e0f471c5-8336-42d0-84ff-6e85011cea0a","Type":"ContainerDied","Data":"c8e2101c3530a4a6c91eb105f8ea1ac261f37002a8c324762b9721773d2f9ebc"} Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.068519 4869 scope.go:117] "RemoveContainer" containerID="784ab89566a139f289692bf04bca80070c919ebfb6596fdbc8bb7f3f8784240a" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.070089 4869 generic.go:334] "Generic (PLEG): container finished" podID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" containerID="c9613543e86c1c89588bea85cc257021eab54e7e5b5a3c709e883b680cdcef28" exitCode=0 Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.070166 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8wrz" event={"ID":"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137","Type":"ContainerDied","Data":"c9613543e86c1c89588bea85cc257021eab54e7e5b5a3c709e883b680cdcef28"} Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.070201 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8wrz" event={"ID":"a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137","Type":"ContainerDied","Data":"988839591f62b86907af44d40e625b91c1c6bcba2a7b4763c149fdff6ae7990e"} Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.070265 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8wrz" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.074239 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3073b84-85aa-4f76-9ade-5e52abfc7cf7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3073b84-85aa-4f76-9ade-5e52abfc7cf7" (UID: "a3073b84-85aa-4f76-9ade-5e52abfc7cf7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.074807 4869 generic.go:334] "Generic (PLEG): container finished" podID="c0f4d25c-95bf-4bcd-b4a7-eb8344871cce" containerID="f83052014b03c289694025817f4d8b03b70b9417f78fc627f51773a2db8e71b0" exitCode=0 Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.074890 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbszs" event={"ID":"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce","Type":"ContainerDied","Data":"f83052014b03c289694025817f4d8b03b70b9417f78fc627f51773a2db8e71b0"} Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.074934 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbszs" event={"ID":"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce","Type":"ContainerDied","Data":"88ee309e80291399cebd109e8cc4075a8755b8a1e894c7c34194078865267821"} Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.074902 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cbszs" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.081075 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" (UID: "a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.081450 4869 generic.go:334] "Generic (PLEG): container finished" podID="1a2b8334-967b-4600-954a-db3f0bd2cd80" containerID="c7acd53e9d750773b403e1e8301f089a8429857a2c4a69c4d3added46c6d5dfe" exitCode=0 Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.081497 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szfbw" event={"ID":"1a2b8334-967b-4600-954a-db3f0bd2cd80","Type":"ContainerDied","Data":"c7acd53e9d750773b403e1e8301f089a8429857a2c4a69c4d3added46c6d5dfe"} Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.081542 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-szfbw" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.081577 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szfbw" event={"ID":"1a2b8334-967b-4600-954a-db3f0bd2cd80","Type":"ContainerDied","Data":"111920e8c184034a4153b04b225cad1c32cd3b112b4ffdb909291f6447f76f62"} Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.083769 4869 generic.go:334] "Generic (PLEG): container finished" podID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" containerID="6d4337bf98f368463e127e06184adf979f52adfbec52c1ce66bd0e12fee3fac9" exitCode=0 Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.083814 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ct92x" event={"ID":"dff049ab-f2f2-47b0-ad0d-28a5977bd953","Type":"ContainerDied","Data":"6d4337bf98f368463e127e06184adf979f52adfbec52c1ce66bd0e12fee3fac9"} Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.083843 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ct92x" event={"ID":"dff049ab-f2f2-47b0-ad0d-28a5977bd953","Type":"ContainerDied","Data":"c08dde5728de9b89c77451beb869225a452fa7d748ea22dd3b9f278185f86432"} Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.083917 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ct92x" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.085548 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a2b8334-967b-4600-954a-db3f0bd2cd80-catalog-content\") pod \"1a2b8334-967b-4600-954a-db3f0bd2cd80\" (UID: \"1a2b8334-967b-4600-954a-db3f0bd2cd80\") " Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.085621 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c590ed4f-a46e-4826-beac-2d353aab75e1-catalog-content\") pod \"c590ed4f-a46e-4826-beac-2d353aab75e1\" (UID: \"c590ed4f-a46e-4826-beac-2d353aab75e1\") " Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.085649 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0f4d25c-95bf-4bcd-b4a7-eb8344871cce-utilities\") pod \"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce\" (UID: \"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce\") " Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.085774 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nhxl\" (UniqueName: \"kubernetes.io/projected/c590ed4f-a46e-4826-beac-2d353aab75e1-kube-api-access-7nhxl\") pod \"c590ed4f-a46e-4826-beac-2d353aab75e1\" (UID: \"c590ed4f-a46e-4826-beac-2d353aab75e1\") " Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.087104 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0f471c5-8336-42d0-84ff-6e85011cea0a-marketplace-trusted-ca\") pod \"e0f471c5-8336-42d0-84ff-6e85011cea0a\" (UID: \"e0f471c5-8336-42d0-84ff-6e85011cea0a\") " Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.089003 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0f4d25c-95bf-4bcd-b4a7-eb8344871cce-catalog-content\") pod \"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce\" (UID: \"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce\") " Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.089055 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c590ed4f-a46e-4826-beac-2d353aab75e1-utilities\") pod \"c590ed4f-a46e-4826-beac-2d353aab75e1\" (UID: \"c590ed4f-a46e-4826-beac-2d353aab75e1\") " Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.089101 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjbqs\" (UniqueName: \"kubernetes.io/projected/e0f471c5-8336-42d0-84ff-6e85011cea0a-kube-api-access-pjbqs\") pod \"e0f471c5-8336-42d0-84ff-6e85011cea0a\" (UID: \"e0f471c5-8336-42d0-84ff-6e85011cea0a\") " Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.089139 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64ll4\" (UniqueName: \"kubernetes.io/projected/1a2b8334-967b-4600-954a-db3f0bd2cd80-kube-api-access-64ll4\") pod \"1a2b8334-967b-4600-954a-db3f0bd2cd80\" (UID: \"1a2b8334-967b-4600-954a-db3f0bd2cd80\") " Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.089202 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc2v9\" (UniqueName: 
\"kubernetes.io/projected/c0f4d25c-95bf-4bcd-b4a7-eb8344871cce-kube-api-access-vc2v9\") pod \"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce\" (UID: \"c0f4d25c-95bf-4bcd-b4a7-eb8344871cce\") " Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.089240 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e0f471c5-8336-42d0-84ff-6e85011cea0a-marketplace-operator-metrics\") pod \"e0f471c5-8336-42d0-84ff-6e85011cea0a\" (UID: \"e0f471c5-8336-42d0-84ff-6e85011cea0a\") " Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.089269 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dff049ab-f2f2-47b0-ad0d-28a5977bd953-utilities\") pod \"dff049ab-f2f2-47b0-ad0d-28a5977bd953\" (UID: \"dff049ab-f2f2-47b0-ad0d-28a5977bd953\") " Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.089313 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p59n4\" (UniqueName: \"kubernetes.io/projected/dff049ab-f2f2-47b0-ad0d-28a5977bd953-kube-api-access-p59n4\") pod \"dff049ab-f2f2-47b0-ad0d-28a5977bd953\" (UID: \"dff049ab-f2f2-47b0-ad0d-28a5977bd953\") " Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.089361 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a2b8334-967b-4600-954a-db3f0bd2cd80-utilities\") pod \"1a2b8334-967b-4600-954a-db3f0bd2cd80\" (UID: \"1a2b8334-967b-4600-954a-db3f0bd2cd80\") " Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.089391 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dff049ab-f2f2-47b0-ad0d-28a5977bd953-catalog-content\") pod \"dff049ab-f2f2-47b0-ad0d-28a5977bd953\" (UID: \"dff049ab-f2f2-47b0-ad0d-28a5977bd953\") " Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.089936 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc2kp\" (UniqueName: \"kubernetes.io/projected/a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137-kube-api-access-dc2kp\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.089963 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r7z9\" (UniqueName: \"kubernetes.io/projected/a3073b84-85aa-4f76-9ade-5e52abfc7cf7-kube-api-access-2r7z9\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.089973 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.089983 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3073b84-85aa-4f76-9ade-5e52abfc7cf7-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.089994 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3073b84-85aa-4f76-9ade-5e52abfc7cf7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.090003 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.091140 4869 scope.go:117] "RemoveContainer" containerID="3bb550f140e8897e7e0fcd7bd47939ec0805147f593b8d290fe22f64a46612c8" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.095835 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0f4d25c-95bf-4bcd-b4a7-eb8344871cce-utilities" (OuterVolumeSpecName: "utilities") pod "c0f4d25c-95bf-4bcd-b4a7-eb8344871cce" (UID: "c0f4d25c-95bf-4bcd-b4a7-eb8344871cce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.096368 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0f471c5-8336-42d0-84ff-6e85011cea0a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "e0f471c5-8336-42d0-84ff-6e85011cea0a" (UID: "e0f471c5-8336-42d0-84ff-6e85011cea0a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.096601 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c590ed4f-a46e-4826-beac-2d353aab75e1-utilities" (OuterVolumeSpecName: "utilities") pod "c590ed4f-a46e-4826-beac-2d353aab75e1" (UID: "c590ed4f-a46e-4826-beac-2d353aab75e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.099908 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0f4d25c-95bf-4bcd-b4a7-eb8344871cce-kube-api-access-vc2v9" (OuterVolumeSpecName: "kube-api-access-vc2v9") pod "c0f4d25c-95bf-4bcd-b4a7-eb8344871cce" (UID: "c0f4d25c-95bf-4bcd-b4a7-eb8344871cce"). InnerVolumeSpecName "kube-api-access-vc2v9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.099938 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c590ed4f-a46e-4826-beac-2d353aab75e1-kube-api-access-7nhxl" (OuterVolumeSpecName: "kube-api-access-7nhxl") pod "c590ed4f-a46e-4826-beac-2d353aab75e1" (UID: "c590ed4f-a46e-4826-beac-2d353aab75e1"). InnerVolumeSpecName "kube-api-access-7nhxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.100625 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0f471c5-8336-42d0-84ff-6e85011cea0a-kube-api-access-pjbqs" (OuterVolumeSpecName: "kube-api-access-pjbqs") pod "e0f471c5-8336-42d0-84ff-6e85011cea0a" (UID: "e0f471c5-8336-42d0-84ff-6e85011cea0a"). InnerVolumeSpecName "kube-api-access-pjbqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.102237 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dff049ab-f2f2-47b0-ad0d-28a5977bd953-utilities" (OuterVolumeSpecName: "utilities") pod "dff049ab-f2f2-47b0-ad0d-28a5977bd953" (UID: "dff049ab-f2f2-47b0-ad0d-28a5977bd953"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.102854 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a2b8334-967b-4600-954a-db3f0bd2cd80-kube-api-access-64ll4" (OuterVolumeSpecName: "kube-api-access-64ll4") pod "1a2b8334-967b-4600-954a-db3f0bd2cd80" (UID: "1a2b8334-967b-4600-954a-db3f0bd2cd80"). InnerVolumeSpecName "kube-api-access-64ll4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.105403 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a2b8334-967b-4600-954a-db3f0bd2cd80-utilities" (OuterVolumeSpecName: "utilities") pod "1a2b8334-967b-4600-954a-db3f0bd2cd80" (UID: "1a2b8334-967b-4600-954a-db3f0bd2cd80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.112710 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c590ed4f-a46e-4826-beac-2d353aab75e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c590ed4f-a46e-4826-beac-2d353aab75e1" (UID: "c590ed4f-a46e-4826-beac-2d353aab75e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.114472 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dff049ab-f2f2-47b0-ad0d-28a5977bd953-kube-api-access-p59n4" (OuterVolumeSpecName: "kube-api-access-p59n4") pod "dff049ab-f2f2-47b0-ad0d-28a5977bd953" (UID: "dff049ab-f2f2-47b0-ad0d-28a5977bd953"). InnerVolumeSpecName "kube-api-access-p59n4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.125364 4869 scope.go:117] "RemoveContainer" containerID="2533aa5f3f57120e64fc88a1065174886eb09d3a111167a642eef9caf4e9349b" Jan 06 14:04:53 crc kubenswrapper[4869]: E0106 14:04:53.127201 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2533aa5f3f57120e64fc88a1065174886eb09d3a111167a642eef9caf4e9349b\": container with ID starting with 2533aa5f3f57120e64fc88a1065174886eb09d3a111167a642eef9caf4e9349b not found: ID does not exist" containerID="2533aa5f3f57120e64fc88a1065174886eb09d3a111167a642eef9caf4e9349b" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.127260 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2533aa5f3f57120e64fc88a1065174886eb09d3a111167a642eef9caf4e9349b"} err="failed to get container status \"2533aa5f3f57120e64fc88a1065174886eb09d3a111167a642eef9caf4e9349b\": rpc error: code = NotFound desc = could not find container \"2533aa5f3f57120e64fc88a1065174886eb09d3a111167a642eef9caf4e9349b\": container with ID starting with 2533aa5f3f57120e64fc88a1065174886eb09d3a111167a642eef9caf4e9349b not found: ID does not exist" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.127295 4869 scope.go:117] "RemoveContainer" containerID="784ab89566a139f289692bf04bca80070c919ebfb6596fdbc8bb7f3f8784240a" Jan 06 14:04:53 crc kubenswrapper[4869]: E0106 14:04:53.128656 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"784ab89566a139f289692bf04bca80070c919ebfb6596fdbc8bb7f3f8784240a\": container with ID starting with 784ab89566a139f289692bf04bca80070c919ebfb6596fdbc8bb7f3f8784240a not found: ID does not exist" containerID="784ab89566a139f289692bf04bca80070c919ebfb6596fdbc8bb7f3f8784240a" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.128731 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"784ab89566a139f289692bf04bca80070c919ebfb6596fdbc8bb7f3f8784240a"} err="failed to get container status \"784ab89566a139f289692bf04bca80070c919ebfb6596fdbc8bb7f3f8784240a\": rpc error: code = NotFound desc = could not find container \"784ab89566a139f289692bf04bca80070c919ebfb6596fdbc8bb7f3f8784240a\": container with ID starting with 784ab89566a139f289692bf04bca80070c919ebfb6596fdbc8bb7f3f8784240a not found: ID does not exist" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.128768 4869 scope.go:117] "RemoveContainer" containerID="3bb550f140e8897e7e0fcd7bd47939ec0805147f593b8d290fe22f64a46612c8" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.128693 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f471c5-8336-42d0-84ff-6e85011cea0a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "e0f471c5-8336-42d0-84ff-6e85011cea0a" (UID: "e0f471c5-8336-42d0-84ff-6e85011cea0a"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:04:53 crc kubenswrapper[4869]: E0106 14:04:53.129772 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bb550f140e8897e7e0fcd7bd47939ec0805147f593b8d290fe22f64a46612c8\": container with ID starting with 3bb550f140e8897e7e0fcd7bd47939ec0805147f593b8d290fe22f64a46612c8 not found: ID does not exist" containerID="3bb550f140e8897e7e0fcd7bd47939ec0805147f593b8d290fe22f64a46612c8" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.129828 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bb550f140e8897e7e0fcd7bd47939ec0805147f593b8d290fe22f64a46612c8"} err="failed to get container status \"3bb550f140e8897e7e0fcd7bd47939ec0805147f593b8d290fe22f64a46612c8\": rpc error: code = NotFound desc = could not find container \"3bb550f140e8897e7e0fcd7bd47939ec0805147f593b8d290fe22f64a46612c8\": container with ID starting with 3bb550f140e8897e7e0fcd7bd47939ec0805147f593b8d290fe22f64a46612c8 not found: ID does not exist" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.129850 4869 scope.go:117] "RemoveContainer" containerID="2f922aa859804868bd11abfd3def7d30c06e20401ca97ecf109a69f693814cea" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.134682 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cxrrl"] Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.148594 4869 scope.go:117] "RemoveContainer" containerID="1ae0e4c79af4e12818013e9d19f0f23be7161b916362af88db2ceddb44e422a7" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.173262 4869 scope.go:117] "RemoveContainer" containerID="b1a6c6c1e735f6a46402dc6bd49a33ed53029ba163df79147423ac178ae204ee" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.189587 4869 scope.go:117] "RemoveContainer" containerID="2f922aa859804868bd11abfd3def7d30c06e20401ca97ecf109a69f693814cea" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.190196 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a2b8334-967b-4600-954a-db3f0bd2cd80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a2b8334-967b-4600-954a-db3f0bd2cd80" (UID: "1a2b8334-967b-4600-954a-db3f0bd2cd80"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.190772 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nhxl\" (UniqueName: \"kubernetes.io/projected/c590ed4f-a46e-4826-beac-2d353aab75e1-kube-api-access-7nhxl\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.190804 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e0f471c5-8336-42d0-84ff-6e85011cea0a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.190814 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c590ed4f-a46e-4826-beac-2d353aab75e1-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.190825 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64ll4\" (UniqueName: \"kubernetes.io/projected/1a2b8334-967b-4600-954a-db3f0bd2cd80-kube-api-access-64ll4\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.190835 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc2v9\" (UniqueName: \"kubernetes.io/projected/c0f4d25c-95bf-4bcd-b4a7-eb8344871cce-kube-api-access-vc2v9\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.190846 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjbqs\" (UniqueName: \"kubernetes.io/projected/e0f471c5-8336-42d0-84ff-6e85011cea0a-kube-api-access-pjbqs\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.190856 4869 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e0f471c5-8336-42d0-84ff-6e85011cea0a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.190866 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dff049ab-f2f2-47b0-ad0d-28a5977bd953-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.190874 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p59n4\" (UniqueName: \"kubernetes.io/projected/dff049ab-f2f2-47b0-ad0d-28a5977bd953-kube-api-access-p59n4\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.190883 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a2b8334-967b-4600-954a-db3f0bd2cd80-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.190891 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a2b8334-967b-4600-954a-db3f0bd2cd80-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.190906 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c590ed4f-a46e-4826-beac-2d353aab75e1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.190915 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c0f4d25c-95bf-4bcd-b4a7-eb8344871cce-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:53 crc kubenswrapper[4869]: E0106 14:04:53.191827 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f922aa859804868bd11abfd3def7d30c06e20401ca97ecf109a69f693814cea\": container with ID starting with 2f922aa859804868bd11abfd3def7d30c06e20401ca97ecf109a69f693814cea not found: ID does not exist" containerID="2f922aa859804868bd11abfd3def7d30c06e20401ca97ecf109a69f693814cea" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.191880 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f922aa859804868bd11abfd3def7d30c06e20401ca97ecf109a69f693814cea"} err="failed to get container status \"2f922aa859804868bd11abfd3def7d30c06e20401ca97ecf109a69f693814cea\": rpc error: code = NotFound desc = could not find container \"2f922aa859804868bd11abfd3def7d30c06e20401ca97ecf109a69f693814cea\": container with ID starting with 2f922aa859804868bd11abfd3def7d30c06e20401ca97ecf109a69f693814cea not found: ID does not exist" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.191918 4869 scope.go:117] "RemoveContainer" containerID="1ae0e4c79af4e12818013e9d19f0f23be7161b916362af88db2ceddb44e422a7" Jan 06 14:04:53 crc kubenswrapper[4869]: E0106 14:04:53.192360 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ae0e4c79af4e12818013e9d19f0f23be7161b916362af88db2ceddb44e422a7\": container with ID starting with 1ae0e4c79af4e12818013e9d19f0f23be7161b916362af88db2ceddb44e422a7 not found: ID does not exist" containerID="1ae0e4c79af4e12818013e9d19f0f23be7161b916362af88db2ceddb44e422a7" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.192406 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ae0e4c79af4e12818013e9d19f0f23be7161b916362af88db2ceddb44e422a7"} err="failed to get container status \"1ae0e4c79af4e12818013e9d19f0f23be7161b916362af88db2ceddb44e422a7\": rpc error: code = NotFound desc = could not find container \"1ae0e4c79af4e12818013e9d19f0f23be7161b916362af88db2ceddb44e422a7\": container with ID starting with 1ae0e4c79af4e12818013e9d19f0f23be7161b916362af88db2ceddb44e422a7 not found: ID does not exist" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.192444 4869 scope.go:117] "RemoveContainer" containerID="b1a6c6c1e735f6a46402dc6bd49a33ed53029ba163df79147423ac178ae204ee" Jan 06 14:04:53 crc kubenswrapper[4869]: E0106 14:04:53.192901 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1a6c6c1e735f6a46402dc6bd49a33ed53029ba163df79147423ac178ae204ee\": container with ID starting with b1a6c6c1e735f6a46402dc6bd49a33ed53029ba163df79147423ac178ae204ee not found: ID does not exist" containerID="b1a6c6c1e735f6a46402dc6bd49a33ed53029ba163df79147423ac178ae204ee" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.192929 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1a6c6c1e735f6a46402dc6bd49a33ed53029ba163df79147423ac178ae204ee"} err="failed to get container status \"b1a6c6c1e735f6a46402dc6bd49a33ed53029ba163df79147423ac178ae204ee\": rpc error: code = NotFound desc = could not find container \"b1a6c6c1e735f6a46402dc6bd49a33ed53029ba163df79147423ac178ae204ee\": container with ID starting with 
b1a6c6c1e735f6a46402dc6bd49a33ed53029ba163df79147423ac178ae204ee not found: ID does not exist" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.192950 4869 scope.go:117] "RemoveContainer" containerID="9612009ac8530ea582ca9abe55fd4aeb44ac20ad73a4f2e4ea5373ae3973fec1" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.220608 4869 scope.go:117] "RemoveContainer" containerID="9612009ac8530ea582ca9abe55fd4aeb44ac20ad73a4f2e4ea5373ae3973fec1" Jan 06 14:04:53 crc kubenswrapper[4869]: E0106 14:04:53.221051 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9612009ac8530ea582ca9abe55fd4aeb44ac20ad73a4f2e4ea5373ae3973fec1\": container with ID starting with 9612009ac8530ea582ca9abe55fd4aeb44ac20ad73a4f2e4ea5373ae3973fec1 not found: ID does not exist" containerID="9612009ac8530ea582ca9abe55fd4aeb44ac20ad73a4f2e4ea5373ae3973fec1" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.221178 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9612009ac8530ea582ca9abe55fd4aeb44ac20ad73a4f2e4ea5373ae3973fec1"} err="failed to get container status \"9612009ac8530ea582ca9abe55fd4aeb44ac20ad73a4f2e4ea5373ae3973fec1\": rpc error: code = NotFound desc = could not find container \"9612009ac8530ea582ca9abe55fd4aeb44ac20ad73a4f2e4ea5373ae3973fec1\": container with ID starting with 9612009ac8530ea582ca9abe55fd4aeb44ac20ad73a4f2e4ea5373ae3973fec1 not found: ID does not exist" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.221267 4869 scope.go:117] "RemoveContainer" containerID="c9613543e86c1c89588bea85cc257021eab54e7e5b5a3c709e883b680cdcef28" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.248915 4869 scope.go:117] "RemoveContainer" containerID="1d0bf702c91fb5171731384a12b1c0a4cfffa39dc94e0b1cad54be8c8464d978" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.251959 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dff049ab-f2f2-47b0-ad0d-28a5977bd953-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dff049ab-f2f2-47b0-ad0d-28a5977bd953" (UID: "dff049ab-f2f2-47b0-ad0d-28a5977bd953"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.265268 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0f4d25c-95bf-4bcd-b4a7-eb8344871cce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0f4d25c-95bf-4bcd-b4a7-eb8344871cce" (UID: "c0f4d25c-95bf-4bcd-b4a7-eb8344871cce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.267718 4869 scope.go:117] "RemoveContainer" containerID="586af9e18e36f6f57c8b2cf43e3fe619cf25ec160ed78887d7253d6de9a50ed1" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.285137 4869 scope.go:117] "RemoveContainer" containerID="c9613543e86c1c89588bea85cc257021eab54e7e5b5a3c709e883b680cdcef28" Jan 06 14:04:53 crc kubenswrapper[4869]: E0106 14:04:53.285982 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9613543e86c1c89588bea85cc257021eab54e7e5b5a3c709e883b680cdcef28\": container with ID starting with c9613543e86c1c89588bea85cc257021eab54e7e5b5a3c709e883b680cdcef28 not found: ID does not exist" containerID="c9613543e86c1c89588bea85cc257021eab54e7e5b5a3c709e883b680cdcef28" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.286047 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9613543e86c1c89588bea85cc257021eab54e7e5b5a3c709e883b680cdcef28"} err="failed to get container status \"c9613543e86c1c89588bea85cc257021eab54e7e5b5a3c709e883b680cdcef28\": rpc error: code = NotFound desc = could not find container \"c9613543e86c1c89588bea85cc257021eab54e7e5b5a3c709e883b680cdcef28\": container with ID starting with c9613543e86c1c89588bea85cc257021eab54e7e5b5a3c709e883b680cdcef28 not found: ID does not exist" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.286090 4869 scope.go:117] "RemoveContainer" containerID="1d0bf702c91fb5171731384a12b1c0a4cfffa39dc94e0b1cad54be8c8464d978" Jan 06 14:04:53 crc kubenswrapper[4869]: E0106 14:04:53.286609 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d0bf702c91fb5171731384a12b1c0a4cfffa39dc94e0b1cad54be8c8464d978\": container with ID starting with 1d0bf702c91fb5171731384a12b1c0a4cfffa39dc94e0b1cad54be8c8464d978 not found: ID does not exist" containerID="1d0bf702c91fb5171731384a12b1c0a4cfffa39dc94e0b1cad54be8c8464d978" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.286633 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d0bf702c91fb5171731384a12b1c0a4cfffa39dc94e0b1cad54be8c8464d978"} err="failed to get container status \"1d0bf702c91fb5171731384a12b1c0a4cfffa39dc94e0b1cad54be8c8464d978\": rpc error: code = NotFound desc = could not find container \"1d0bf702c91fb5171731384a12b1c0a4cfffa39dc94e0b1cad54be8c8464d978\": container with ID starting with 1d0bf702c91fb5171731384a12b1c0a4cfffa39dc94e0b1cad54be8c8464d978 not found: ID does not exist" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.286648 4869 scope.go:117] "RemoveContainer" containerID="586af9e18e36f6f57c8b2cf43e3fe619cf25ec160ed78887d7253d6de9a50ed1" Jan 06 14:04:53 crc kubenswrapper[4869]: E0106 14:04:53.287000 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"586af9e18e36f6f57c8b2cf43e3fe619cf25ec160ed78887d7253d6de9a50ed1\": container with ID starting with 586af9e18e36f6f57c8b2cf43e3fe619cf25ec160ed78887d7253d6de9a50ed1 not found: ID does not exist" containerID="586af9e18e36f6f57c8b2cf43e3fe619cf25ec160ed78887d7253d6de9a50ed1" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.287061 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"586af9e18e36f6f57c8b2cf43e3fe619cf25ec160ed78887d7253d6de9a50ed1"} err="failed to get container status \"586af9e18e36f6f57c8b2cf43e3fe619cf25ec160ed78887d7253d6de9a50ed1\": rpc error: code = NotFound desc = could not find container \"586af9e18e36f6f57c8b2cf43e3fe619cf25ec160ed78887d7253d6de9a50ed1\": container with ID starting with 586af9e18e36f6f57c8b2cf43e3fe619cf25ec160ed78887d7253d6de9a50ed1 not found: ID does not exist" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.287080 4869 scope.go:117] "RemoveContainer" containerID="f83052014b03c289694025817f4d8b03b70b9417f78fc627f51773a2db8e71b0" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.292569 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0f4d25c-95bf-4bcd-b4a7-eb8344871cce-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.292617 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dff049ab-f2f2-47b0-ad0d-28a5977bd953-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.301803 4869 scope.go:117] "RemoveContainer" containerID="3089f71ab4726fb4b48874525c0f338fdc0eed4d037696292c7d5442e4a83391" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.321001 4869 scope.go:117] "RemoveContainer" containerID="bf0feedb54575efbbf182f7b828ca238c3e9a80021ba35374a0aaa87d9ec1234" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.341752 4869 scope.go:117] "RemoveContainer" containerID="f83052014b03c289694025817f4d8b03b70b9417f78fc627f51773a2db8e71b0" Jan 06 14:04:53 crc kubenswrapper[4869]: E0106 14:04:53.342530 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f83052014b03c289694025817f4d8b03b70b9417f78fc627f51773a2db8e71b0\": container with ID starting with f83052014b03c289694025817f4d8b03b70b9417f78fc627f51773a2db8e71b0 not found: ID does not exist" containerID="f83052014b03c289694025817f4d8b03b70b9417f78fc627f51773a2db8e71b0" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.342655 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f83052014b03c289694025817f4d8b03b70b9417f78fc627f51773a2db8e71b0"} err="failed to get container status \"f83052014b03c289694025817f4d8b03b70b9417f78fc627f51773a2db8e71b0\": rpc error: code = NotFound desc = could not find container \"f83052014b03c289694025817f4d8b03b70b9417f78fc627f51773a2db8e71b0\": container with ID starting with f83052014b03c289694025817f4d8b03b70b9417f78fc627f51773a2db8e71b0 not found: ID does not exist" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.342716 4869 scope.go:117] "RemoveContainer" containerID="3089f71ab4726fb4b48874525c0f338fdc0eed4d037696292c7d5442e4a83391" Jan 06 14:04:53 crc kubenswrapper[4869]: E0106 14:04:53.343288 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3089f71ab4726fb4b48874525c0f338fdc0eed4d037696292c7d5442e4a83391\": container with ID starting with 3089f71ab4726fb4b48874525c0f338fdc0eed4d037696292c7d5442e4a83391 not found: ID does not exist" containerID="3089f71ab4726fb4b48874525c0f338fdc0eed4d037696292c7d5442e4a83391" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.343408 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3089f71ab4726fb4b48874525c0f338fdc0eed4d037696292c7d5442e4a83391"} err="failed to get container status \"3089f71ab4726fb4b48874525c0f338fdc0eed4d037696292c7d5442e4a83391\": rpc error: code = NotFound desc = could not find container \"3089f71ab4726fb4b48874525c0f338fdc0eed4d037696292c7d5442e4a83391\": container with ID starting with 3089f71ab4726fb4b48874525c0f338fdc0eed4d037696292c7d5442e4a83391 not found: ID does not exist" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.343517 4869 scope.go:117] "RemoveContainer" containerID="bf0feedb54575efbbf182f7b828ca238c3e9a80021ba35374a0aaa87d9ec1234" Jan 06 14:04:53 crc kubenswrapper[4869]: E0106 14:04:53.344083 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf0feedb54575efbbf182f7b828ca238c3e9a80021ba35374a0aaa87d9ec1234\": container with ID starting with bf0feedb54575efbbf182f7b828ca238c3e9a80021ba35374a0aaa87d9ec1234 not found: ID does not exist" containerID="bf0feedb54575efbbf182f7b828ca238c3e9a80021ba35374a0aaa87d9ec1234" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.344171 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf0feedb54575efbbf182f7b828ca238c3e9a80021ba35374a0aaa87d9ec1234"} err="failed to get container status \"bf0feedb54575efbbf182f7b828ca238c3e9a80021ba35374a0aaa87d9ec1234\": rpc error: code = NotFound desc = could not find container \"bf0feedb54575efbbf182f7b828ca238c3e9a80021ba35374a0aaa87d9ec1234\": container with ID starting with bf0feedb54575efbbf182f7b828ca238c3e9a80021ba35374a0aaa87d9ec1234 not found: ID does not exist" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.344238 4869 scope.go:117] "RemoveContainer" containerID="c7acd53e9d750773b403e1e8301f089a8429857a2c4a69c4d3added46c6d5dfe" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.368131 4869 scope.go:117] "RemoveContainer" containerID="14432d944c73407e895431d4c827906e7c515eb27e5ec432beaf80b8e58d5a5c" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.379955 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5xn5"] Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.386809 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5xn5"] Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.393057 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2l76t"] Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.397215 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2l76t"] Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.404748 4869 scope.go:117] "RemoveContainer" containerID="9aeaef6c5ba1b097b50a50a81c76a92344a46254fe5b00e2277c22d895da2cda" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.426303 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h6xlw"] Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.436615 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h6xlw"] Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.449869 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cbszs"] Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.454906 4869 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cbszs"] Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.501104 4869 scope.go:117] "RemoveContainer" containerID="c7acd53e9d750773b403e1e8301f089a8429857a2c4a69c4d3added46c6d5dfe" Jan 06 14:04:53 crc kubenswrapper[4869]: E0106 14:04:53.501876 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7acd53e9d750773b403e1e8301f089a8429857a2c4a69c4d3added46c6d5dfe\": container with ID starting with c7acd53e9d750773b403e1e8301f089a8429857a2c4a69c4d3added46c6d5dfe not found: ID does not exist" containerID="c7acd53e9d750773b403e1e8301f089a8429857a2c4a69c4d3added46c6d5dfe" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.501918 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7acd53e9d750773b403e1e8301f089a8429857a2c4a69c4d3added46c6d5dfe"} err="failed to get container status \"c7acd53e9d750773b403e1e8301f089a8429857a2c4a69c4d3added46c6d5dfe\": rpc error: code = NotFound desc = could not find container \"c7acd53e9d750773b403e1e8301f089a8429857a2c4a69c4d3added46c6d5dfe\": container with ID starting with c7acd53e9d750773b403e1e8301f089a8429857a2c4a69c4d3added46c6d5dfe not found: ID does not exist" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.501949 4869 scope.go:117] "RemoveContainer" containerID="14432d944c73407e895431d4c827906e7c515eb27e5ec432beaf80b8e58d5a5c" Jan 06 14:04:53 crc kubenswrapper[4869]: E0106 14:04:53.503312 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14432d944c73407e895431d4c827906e7c515eb27e5ec432beaf80b8e58d5a5c\": container with ID starting with 14432d944c73407e895431d4c827906e7c515eb27e5ec432beaf80b8e58d5a5c not found: ID does not exist" containerID="14432d944c73407e895431d4c827906e7c515eb27e5ec432beaf80b8e58d5a5c" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.503361 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14432d944c73407e895431d4c827906e7c515eb27e5ec432beaf80b8e58d5a5c"} err="failed to get container status \"14432d944c73407e895431d4c827906e7c515eb27e5ec432beaf80b8e58d5a5c\": rpc error: code = NotFound desc = could not find container \"14432d944c73407e895431d4c827906e7c515eb27e5ec432beaf80b8e58d5a5c\": container with ID starting with 14432d944c73407e895431d4c827906e7c515eb27e5ec432beaf80b8e58d5a5c not found: ID does not exist" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.503393 4869 scope.go:117] "RemoveContainer" containerID="9aeaef6c5ba1b097b50a50a81c76a92344a46254fe5b00e2277c22d895da2cda" Jan 06 14:04:53 crc kubenswrapper[4869]: E0106 14:04:53.503708 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aeaef6c5ba1b097b50a50a81c76a92344a46254fe5b00e2277c22d895da2cda\": container with ID starting with 9aeaef6c5ba1b097b50a50a81c76a92344a46254fe5b00e2277c22d895da2cda not found: ID does not exist" containerID="9aeaef6c5ba1b097b50a50a81c76a92344a46254fe5b00e2277c22d895da2cda" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.503735 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aeaef6c5ba1b097b50a50a81c76a92344a46254fe5b00e2277c22d895da2cda"} err="failed to get container status \"9aeaef6c5ba1b097b50a50a81c76a92344a46254fe5b00e2277c22d895da2cda\": rpc error: 
code = NotFound desc = could not find container \"9aeaef6c5ba1b097b50a50a81c76a92344a46254fe5b00e2277c22d895da2cda\": container with ID starting with 9aeaef6c5ba1b097b50a50a81c76a92344a46254fe5b00e2277c22d895da2cda not found: ID does not exist" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.503750 4869 scope.go:117] "RemoveContainer" containerID="6d4337bf98f368463e127e06184adf979f52adfbec52c1ce66bd0e12fee3fac9" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.519939 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j8wrz"] Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.527223 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j8wrz"] Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.530526 4869 scope.go:117] "RemoveContainer" containerID="f7fdcad0355eb7f745d35e2a23e5a563a1a1fb0a82a483c539000b4bffef10cf" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.537845 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ct92x"] Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.540993 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ct92x"] Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.552248 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-szfbw"] Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.556878 4869 scope.go:117] "RemoveContainer" containerID="7d4574ccdba4f73f28aa0c065679020a6367695ec26a005034fdf05de295ecdc" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.558786 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-szfbw"] Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.580497 4869 scope.go:117] "RemoveContainer" containerID="6d4337bf98f368463e127e06184adf979f52adfbec52c1ce66bd0e12fee3fac9" Jan 06 14:04:53 crc kubenswrapper[4869]: E0106 14:04:53.584901 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d4337bf98f368463e127e06184adf979f52adfbec52c1ce66bd0e12fee3fac9\": container with ID starting with 6d4337bf98f368463e127e06184adf979f52adfbec52c1ce66bd0e12fee3fac9 not found: ID does not exist" containerID="6d4337bf98f368463e127e06184adf979f52adfbec52c1ce66bd0e12fee3fac9" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.584958 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d4337bf98f368463e127e06184adf979f52adfbec52c1ce66bd0e12fee3fac9"} err="failed to get container status \"6d4337bf98f368463e127e06184adf979f52adfbec52c1ce66bd0e12fee3fac9\": rpc error: code = NotFound desc = could not find container \"6d4337bf98f368463e127e06184adf979f52adfbec52c1ce66bd0e12fee3fac9\": container with ID starting with 6d4337bf98f368463e127e06184adf979f52adfbec52c1ce66bd0e12fee3fac9 not found: ID does not exist" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.584995 4869 scope.go:117] "RemoveContainer" containerID="f7fdcad0355eb7f745d35e2a23e5a563a1a1fb0a82a483c539000b4bffef10cf" Jan 06 14:04:53 crc kubenswrapper[4869]: E0106 14:04:53.585374 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7fdcad0355eb7f745d35e2a23e5a563a1a1fb0a82a483c539000b4bffef10cf\": container with ID starting with 
f7fdcad0355eb7f745d35e2a23e5a563a1a1fb0a82a483c539000b4bffef10cf not found: ID does not exist" containerID="f7fdcad0355eb7f745d35e2a23e5a563a1a1fb0a82a483c539000b4bffef10cf" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.585396 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7fdcad0355eb7f745d35e2a23e5a563a1a1fb0a82a483c539000b4bffef10cf"} err="failed to get container status \"f7fdcad0355eb7f745d35e2a23e5a563a1a1fb0a82a483c539000b4bffef10cf\": rpc error: code = NotFound desc = could not find container \"f7fdcad0355eb7f745d35e2a23e5a563a1a1fb0a82a483c539000b4bffef10cf\": container with ID starting with f7fdcad0355eb7f745d35e2a23e5a563a1a1fb0a82a483c539000b4bffef10cf not found: ID does not exist" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.585409 4869 scope.go:117] "RemoveContainer" containerID="7d4574ccdba4f73f28aa0c065679020a6367695ec26a005034fdf05de295ecdc" Jan 06 14:04:53 crc kubenswrapper[4869]: E0106 14:04:53.586025 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d4574ccdba4f73f28aa0c065679020a6367695ec26a005034fdf05de295ecdc\": container with ID starting with 7d4574ccdba4f73f28aa0c065679020a6367695ec26a005034fdf05de295ecdc not found: ID does not exist" containerID="7d4574ccdba4f73f28aa0c065679020a6367695ec26a005034fdf05de295ecdc" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.586056 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d4574ccdba4f73f28aa0c065679020a6367695ec26a005034fdf05de295ecdc"} err="failed to get container status \"7d4574ccdba4f73f28aa0c065679020a6367695ec26a005034fdf05de295ecdc\": rpc error: code = NotFound desc = could not find container \"7d4574ccdba4f73f28aa0c065679020a6367695ec26a005034fdf05de295ecdc\": container with ID starting with 7d4574ccdba4f73f28aa0c065679020a6367695ec26a005034fdf05de295ecdc not found: ID does not exist" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.713601 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" path="/var/lib/kubelet/pods/1a2b8334-967b-4600-954a-db3f0bd2cd80/volumes" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.715299 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" path="/var/lib/kubelet/pods/a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137/volumes" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.716916 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" path="/var/lib/kubelet/pods/a3073b84-85aa-4f76-9ade-5e52abfc7cf7/volumes" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.719311 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0f4d25c-95bf-4bcd-b4a7-eb8344871cce" path="/var/lib/kubelet/pods/c0f4d25c-95bf-4bcd-b4a7-eb8344871cce/volumes" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.720996 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" path="/var/lib/kubelet/pods/c590ed4f-a46e-4826-beac-2d353aab75e1/volumes" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 14:04:53.723549 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" path="/var/lib/kubelet/pods/dff049ab-f2f2-47b0-ad0d-28a5977bd953/volumes" Jan 06 14:04:53 crc kubenswrapper[4869]: I0106 
14:04:53.724277 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0f471c5-8336-42d0-84ff-6e85011cea0a" path="/var/lib/kubelet/pods/e0f471c5-8336-42d0-84ff-6e85011cea0a/volumes" Jan 06 14:04:54 crc kubenswrapper[4869]: I0106 14:04:54.104924 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cxrrl" event={"ID":"604f390e-7e5d-4ec9-8d4f-ce230272186c","Type":"ContainerStarted","Data":"44a9473425f6a938ff01ff59455749ee8aab56c086c03dde9145861033e5b2d0"} Jan 06 14:04:54 crc kubenswrapper[4869]: I0106 14:04:54.107203 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cxrrl" Jan 06 14:04:54 crc kubenswrapper[4869]: I0106 14:04:54.107266 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cxrrl" event={"ID":"604f390e-7e5d-4ec9-8d4f-ce230272186c","Type":"ContainerStarted","Data":"f47ae4409f6d7d2ee605a97e938bb2297eefcd7b12ac4695bed3e887123c3984"} Jan 06 14:04:54 crc kubenswrapper[4869]: I0106 14:04:54.110613 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cxrrl" Jan 06 14:04:54 crc kubenswrapper[4869]: I0106 14:04:54.137422 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cxrrl" podStartSLOduration=2.137386754 podStartE2EDuration="2.137386754s" podCreationTimestamp="2026-01-06 14:04:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:04:54.129151811 +0000 UTC m=+312.668839525" watchObservedRunningTime="2026-01-06 14:04:54.137386754 +0000 UTC m=+312.677074458" Jan 06 14:05:33 crc kubenswrapper[4869]: I0106 14:05:33.622552 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:05:33 crc kubenswrapper[4869]: I0106 14:05:33.623342 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.085705 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dgxjm"] Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.086463 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" podUID="312bcf02-2d7a-4ac1-87fd-25b2e1e42826" containerName="controller-manager" containerID="cri-o://9cb06f13d0dc8fface2ee2821b8d2c560059cae31ac3e535b7e46a3f4ebc7ed9" gracePeriod=30 Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.188137 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df"] Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.188382 4869 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" podUID="cd17fb22-d612-4949-8e94-f0aa870439d9" containerName="route-controller-manager" containerID="cri-o://6062791677ae424edae5154e54750bf3ddeb28bcd76b823d4445de15b1873f88" gracePeriod=30 Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.365993 4869 generic.go:334] "Generic (PLEG): container finished" podID="312bcf02-2d7a-4ac1-87fd-25b2e1e42826" containerID="9cb06f13d0dc8fface2ee2821b8d2c560059cae31ac3e535b7e46a3f4ebc7ed9" exitCode=0 Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.366095 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" event={"ID":"312bcf02-2d7a-4ac1-87fd-25b2e1e42826","Type":"ContainerDied","Data":"9cb06f13d0dc8fface2ee2821b8d2c560059cae31ac3e535b7e46a3f4ebc7ed9"} Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.367762 4869 generic.go:334] "Generic (PLEG): container finished" podID="cd17fb22-d612-4949-8e94-f0aa870439d9" containerID="6062791677ae424edae5154e54750bf3ddeb28bcd76b823d4445de15b1873f88" exitCode=0 Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.367928 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" event={"ID":"cd17fb22-d612-4949-8e94-f0aa870439d9","Type":"ContainerDied","Data":"6062791677ae424edae5154e54750bf3ddeb28bcd76b823d4445de15b1873f88"} Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.390895 4869 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-bm7df container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.391228 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" podUID="cd17fb22-d612-4949-8e94-f0aa870439d9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.598847 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.754955 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-client-ca\") pod \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\" (UID: \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\") " Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.755057 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-proxy-ca-bundles\") pod \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\" (UID: \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\") " Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.755110 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-config\") pod \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\" (UID: \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\") " Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.755157 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-serving-cert\") pod \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\" (UID: \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\") " Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.755229 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2ps4\" (UniqueName: \"kubernetes.io/projected/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-kube-api-access-w2ps4\") pod \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\" (UID: \"312bcf02-2d7a-4ac1-87fd-25b2e1e42826\") " Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.755874 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "312bcf02-2d7a-4ac1-87fd-25b2e1e42826" (UID: "312bcf02-2d7a-4ac1-87fd-25b2e1e42826"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.755913 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-client-ca" (OuterVolumeSpecName: "client-ca") pod "312bcf02-2d7a-4ac1-87fd-25b2e1e42826" (UID: "312bcf02-2d7a-4ac1-87fd-25b2e1e42826"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.756075 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-config" (OuterVolumeSpecName: "config") pod "312bcf02-2d7a-4ac1-87fd-25b2e1e42826" (UID: "312bcf02-2d7a-4ac1-87fd-25b2e1e42826"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.762805 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "312bcf02-2d7a-4ac1-87fd-25b2e1e42826" (UID: "312bcf02-2d7a-4ac1-87fd-25b2e1e42826"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.764929 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-kube-api-access-w2ps4" (OuterVolumeSpecName: "kube-api-access-w2ps4") pod "312bcf02-2d7a-4ac1-87fd-25b2e1e42826" (UID: "312bcf02-2d7a-4ac1-87fd-25b2e1e42826"). InnerVolumeSpecName "kube-api-access-w2ps4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.858681 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-client-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.858718 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.858845 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.858856 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:38 crc kubenswrapper[4869]: I0106 14:05:38.858866 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2ps4\" (UniqueName: \"kubernetes.io/projected/312bcf02-2d7a-4ac1-87fd-25b2e1e42826-kube-api-access-w2ps4\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.005881 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.135778 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-559465bd67-w26cn"] Jan 06 14:05:39 crc kubenswrapper[4869]: E0106 14:05:39.136062 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312bcf02-2d7a-4ac1-87fd-25b2e1e42826" containerName="controller-manager" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136079 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="312bcf02-2d7a-4ac1-87fd-25b2e1e42826" containerName="controller-manager" Jan 06 14:05:39 crc kubenswrapper[4869]: E0106 14:05:39.136091 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" containerName="extract-utilities" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136099 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" containerName="extract-utilities" Jan 06 14:05:39 crc kubenswrapper[4869]: E0106 14:05:39.136111 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0f4d25c-95bf-4bcd-b4a7-eb8344871cce" containerName="registry-server" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136118 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0f4d25c-95bf-4bcd-b4a7-eb8344871cce" containerName="registry-server" Jan 06 14:05:39 crc kubenswrapper[4869]: E0106 14:05:39.136130 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" containerName="registry-server" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136141 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" containerName="registry-server" Jan 06 14:05:39 crc kubenswrapper[4869]: E0106 14:05:39.136150 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" containerName="registry-server" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136158 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" containerName="registry-server" Jan 06 14:05:39 crc kubenswrapper[4869]: E0106 14:05:39.136168 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" containerName="extract-content" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136176 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" containerName="extract-content" Jan 06 14:05:39 crc kubenswrapper[4869]: E0106 14:05:39.136190 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" containerName="extract-content" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136198 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" containerName="extract-content" Jan 06 14:05:39 crc kubenswrapper[4869]: E0106 14:05:39.136209 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" containerName="registry-server" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136219 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" containerName="registry-server" Jan 06 14:05:39 crc kubenswrapper[4869]: E0106 14:05:39.136228 4869 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" containerName="extract-utilities" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136236 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" containerName="extract-utilities" Jan 06 14:05:39 crc kubenswrapper[4869]: E0106 14:05:39.136244 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0f4d25c-95bf-4bcd-b4a7-eb8344871cce" containerName="extract-content" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136252 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0f4d25c-95bf-4bcd-b4a7-eb8344871cce" containerName="extract-content" Jan 06 14:05:39 crc kubenswrapper[4869]: E0106 14:05:39.136263 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd17fb22-d612-4949-8e94-f0aa870439d9" containerName="route-controller-manager" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136270 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd17fb22-d612-4949-8e94-f0aa870439d9" containerName="route-controller-manager" Jan 06 14:05:39 crc kubenswrapper[4869]: E0106 14:05:39.136281 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" containerName="extract-content" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136290 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" containerName="extract-content" Jan 06 14:05:39 crc kubenswrapper[4869]: E0106 14:05:39.136322 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" containerName="registry-server" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136330 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" containerName="registry-server" Jan 06 14:05:39 crc kubenswrapper[4869]: E0106 14:05:39.136339 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f471c5-8336-42d0-84ff-6e85011cea0a" containerName="marketplace-operator" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136346 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f471c5-8336-42d0-84ff-6e85011cea0a" containerName="marketplace-operator" Jan 06 14:05:39 crc kubenswrapper[4869]: E0106 14:05:39.136355 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" containerName="extract-utilities" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136363 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" containerName="extract-utilities" Jan 06 14:05:39 crc kubenswrapper[4869]: E0106 14:05:39.136374 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" containerName="registry-server" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136382 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" containerName="registry-server" Jan 06 14:05:39 crc kubenswrapper[4869]: E0106 14:05:39.136393 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" containerName="extract-utilities" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136401 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" containerName="extract-utilities" Jan 06 
14:05:39 crc kubenswrapper[4869]: E0106 14:05:39.136413 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" containerName="extract-utilities" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136421 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" containerName="extract-utilities" Jan 06 14:05:39 crc kubenswrapper[4869]: E0106 14:05:39.136432 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0f4d25c-95bf-4bcd-b4a7-eb8344871cce" containerName="extract-utilities" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136440 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0f4d25c-95bf-4bcd-b4a7-eb8344871cce" containerName="extract-utilities" Jan 06 14:05:39 crc kubenswrapper[4869]: E0106 14:05:39.136452 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" containerName="extract-content" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136460 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" containerName="extract-content" Jan 06 14:05:39 crc kubenswrapper[4869]: E0106 14:05:39.136473 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" containerName="extract-content" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136480 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" containerName="extract-content" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136596 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c590ed4f-a46e-4826-beac-2d353aab75e1" containerName="registry-server" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136609 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd17fb22-d612-4949-8e94-f0aa870439d9" containerName="route-controller-manager" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136619 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f471c5-8336-42d0-84ff-6e85011cea0a" containerName="marketplace-operator" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136634 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2b2eda7-6444-4b4f-a3a9-2fa4e3a2e137" containerName="registry-server" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136642 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0f4d25c-95bf-4bcd-b4a7-eb8344871cce" containerName="registry-server" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136651 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="312bcf02-2d7a-4ac1-87fd-25b2e1e42826" containerName="controller-manager" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.136679 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="dff049ab-f2f2-47b0-ad0d-28a5977bd953" containerName="registry-server" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.137424 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a2b8334-967b-4600-954a-db3f0bd2cd80" containerName="registry-server" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.137436 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3073b84-85aa-4f76-9ade-5e52abfc7cf7" containerName="registry-server" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.137871 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-559465bd67-w26cn"
Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.154484 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-559465bd67-w26cn"]
Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.162029 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd17fb22-d612-4949-8e94-f0aa870439d9-config\") pod \"cd17fb22-d612-4949-8e94-f0aa870439d9\" (UID: \"cd17fb22-d612-4949-8e94-f0aa870439d9\") "
Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.162091 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd17fb22-d612-4949-8e94-f0aa870439d9-client-ca\") pod \"cd17fb22-d612-4949-8e94-f0aa870439d9\" (UID: \"cd17fb22-d612-4949-8e94-f0aa870439d9\") "
Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.162241 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z67kw\" (UniqueName: \"kubernetes.io/projected/cd17fb22-d612-4949-8e94-f0aa870439d9-kube-api-access-z67kw\") pod \"cd17fb22-d612-4949-8e94-f0aa870439d9\" (UID: \"cd17fb22-d612-4949-8e94-f0aa870439d9\") "
Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.162323 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd17fb22-d612-4949-8e94-f0aa870439d9-serving-cert\") pod \"cd17fb22-d612-4949-8e94-f0aa870439d9\" (UID: \"cd17fb22-d612-4949-8e94-f0aa870439d9\") "
Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.163857 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd17fb22-d612-4949-8e94-f0aa870439d9-config" (OuterVolumeSpecName: "config") pod "cd17fb22-d612-4949-8e94-f0aa870439d9" (UID: "cd17fb22-d612-4949-8e94-f0aa870439d9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.165203 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd17fb22-d612-4949-8e94-f0aa870439d9-client-ca" (OuterVolumeSpecName: "client-ca") pod "cd17fb22-d612-4949-8e94-f0aa870439d9" (UID: "cd17fb22-d612-4949-8e94-f0aa870439d9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.168554 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd17fb22-d612-4949-8e94-f0aa870439d9-kube-api-access-z67kw" (OuterVolumeSpecName: "kube-api-access-z67kw") pod "cd17fb22-d612-4949-8e94-f0aa870439d9" (UID: "cd17fb22-d612-4949-8e94-f0aa870439d9"). InnerVolumeSpecName "kube-api-access-z67kw". PluginName "kubernetes.io/projected", VolumeGidValue ""
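
The UnmountVolume / TearDown sequence here is the kubelet volume manager reconciling its actual state of the world against the desired state once the old route-controller-manager pod (cd17fb22...) is gone: every volume still mounted for a pod that no longer wants it gets torn down, and only then is it reported detached. A schematic of that reconcile pass, with invented types (the real implementation lives under the kubelet's volumemanager package):

```go
package main

import "fmt"

// Schematic of the reconcile pass behind the reconciler_common.go
// messages: anything in the actual state that is absent from the
// desired state is torn down. Types are invented for illustration.
type mountedVolume struct{ volName, podUID string }

func reconcile(desired map[mountedVolume]bool, actual []mountedVolume,
	tearDown func(mountedVolume) error) {
	for _, v := range actual {
		if desired[v] {
			continue // pod still wants this volume: leave it mounted
		}
		// "operationExecutor.UnmountVolume started ..."
		if err := tearDown(v); err != nil {
			continue // stays in actual state; retried on the next pass
		}
		// "UnmountVolume.TearDown succeeded ...", after which the
		// reconciler logs "Volume detached ... DevicePath \"\"".
	}
}

func main() {
	actual := []mountedVolume{{"config", "cd17fb22"}, {"serving-cert", "cd17fb22"}}
	reconcile(map[mountedVolume]bool{}, actual, func(v mountedVolume) error {
		fmt.Println("torn down:", v.volName)
		return nil
	})
}
```

Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.168778 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd17fb22-d612-4949-8e94-f0aa870439d9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cd17fb22-d612-4949-8e94-f0aa870439d9" (UID: "cd17fb22-d612-4949-8e94-f0aa870439d9"). InnerVolumeSpecName "serving-cert".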
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.264159 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/686e34aa-cd19-4741-be83-7ea0ee736c40-config\") pod \"controller-manager-559465bd67-w26cn\" (UID: \"686e34aa-cd19-4741-be83-7ea0ee736c40\") " pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.264209 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p4jq\" (UniqueName: \"kubernetes.io/projected/686e34aa-cd19-4741-be83-7ea0ee736c40-kube-api-access-9p4jq\") pod \"controller-manager-559465bd67-w26cn\" (UID: \"686e34aa-cd19-4741-be83-7ea0ee736c40\") " pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.264239 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/686e34aa-cd19-4741-be83-7ea0ee736c40-proxy-ca-bundles\") pod \"controller-manager-559465bd67-w26cn\" (UID: \"686e34aa-cd19-4741-be83-7ea0ee736c40\") " pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.264267 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686e34aa-cd19-4741-be83-7ea0ee736c40-client-ca\") pod \"controller-manager-559465bd67-w26cn\" (UID: \"686e34aa-cd19-4741-be83-7ea0ee736c40\") " pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.264293 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686e34aa-cd19-4741-be83-7ea0ee736c40-serving-cert\") pod \"controller-manager-559465bd67-w26cn\" (UID: \"686e34aa-cd19-4741-be83-7ea0ee736c40\") " pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.264379 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z67kw\" (UniqueName: \"kubernetes.io/projected/cd17fb22-d612-4949-8e94-f0aa870439d9-kube-api-access-z67kw\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.264395 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd17fb22-d612-4949-8e94-f0aa870439d9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.264405 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd17fb22-d612-4949-8e94-f0aa870439d9-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.264417 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd17fb22-d612-4949-8e94-f0aa870439d9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.365315 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/686e34aa-cd19-4741-be83-7ea0ee736c40-config\") pod 
\"controller-manager-559465bd67-w26cn\" (UID: \"686e34aa-cd19-4741-be83-7ea0ee736c40\") " pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.365366 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p4jq\" (UniqueName: \"kubernetes.io/projected/686e34aa-cd19-4741-be83-7ea0ee736c40-kube-api-access-9p4jq\") pod \"controller-manager-559465bd67-w26cn\" (UID: \"686e34aa-cd19-4741-be83-7ea0ee736c40\") " pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.365394 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/686e34aa-cd19-4741-be83-7ea0ee736c40-proxy-ca-bundles\") pod \"controller-manager-559465bd67-w26cn\" (UID: \"686e34aa-cd19-4741-be83-7ea0ee736c40\") " pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.365435 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686e34aa-cd19-4741-be83-7ea0ee736c40-client-ca\") pod \"controller-manager-559465bd67-w26cn\" (UID: \"686e34aa-cd19-4741-be83-7ea0ee736c40\") " pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.365471 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686e34aa-cd19-4741-be83-7ea0ee736c40-serving-cert\") pod \"controller-manager-559465bd67-w26cn\" (UID: \"686e34aa-cd19-4741-be83-7ea0ee736c40\") " pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.366424 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686e34aa-cd19-4741-be83-7ea0ee736c40-client-ca\") pod \"controller-manager-559465bd67-w26cn\" (UID: \"686e34aa-cd19-4741-be83-7ea0ee736c40\") " pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.366838 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/686e34aa-cd19-4741-be83-7ea0ee736c40-config\") pod \"controller-manager-559465bd67-w26cn\" (UID: \"686e34aa-cd19-4741-be83-7ea0ee736c40\") " pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.367393 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/686e34aa-cd19-4741-be83-7ea0ee736c40-proxy-ca-bundles\") pod \"controller-manager-559465bd67-w26cn\" (UID: \"686e34aa-cd19-4741-be83-7ea0ee736c40\") " pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.369594 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686e34aa-cd19-4741-be83-7ea0ee736c40-serving-cert\") pod \"controller-manager-559465bd67-w26cn\" (UID: \"686e34aa-cd19-4741-be83-7ea0ee736c40\") " pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.374967 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" event={"ID":"cd17fb22-d612-4949-8e94-f0aa870439d9","Type":"ContainerDied","Data":"978f2263c563b6032ede404b1e611a2c31a326ead82fe4442d97bc5f4982b10f"} Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.375211 4869 scope.go:117] "RemoveContainer" containerID="6062791677ae424edae5154e54750bf3ddeb28bcd76b823d4445de15b1873f88" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.374975 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.377084 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" event={"ID":"312bcf02-2d7a-4ac1-87fd-25b2e1e42826","Type":"ContainerDied","Data":"050968efce60f61f647c22815a9e7aee6f4249a10973a9d6bf4d292c5686c1ed"} Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.377233 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dgxjm" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.383203 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p4jq\" (UniqueName: \"kubernetes.io/projected/686e34aa-cd19-4741-be83-7ea0ee736c40-kube-api-access-9p4jq\") pod \"controller-manager-559465bd67-w26cn\" (UID: \"686e34aa-cd19-4741-be83-7ea0ee736c40\") " pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.419060 4869 scope.go:117] "RemoveContainer" containerID="9cb06f13d0dc8fface2ee2821b8d2c560059cae31ac3e535b7e46a3f4ebc7ed9" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.441889 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df"] Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.445990 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bm7df"] Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.449276 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dgxjm"] Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.456609 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.457310 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dgxjm"] Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.634916 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-559465bd67-w26cn"] Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.727311 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="312bcf02-2d7a-4ac1-87fd-25b2e1e42826" path="/var/lib/kubelet/pods/312bcf02-2d7a-4ac1-87fd-25b2e1e42826/volumes" Jan 06 14:05:39 crc kubenswrapper[4869]: I0106 14:05:39.728118 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd17fb22-d612-4949-8e94-f0aa870439d9" path="/var/lib/kubelet/pods/cd17fb22-d612-4949-8e94-f0aa870439d9/volumes" Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.135238 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn"] Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.136277 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn" Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.139803 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.140081 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.142251 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.143099 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.143111 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.148919 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn"] Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.153546 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.275986 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3be609b8-114c-4f9a-967b-9e8c9f5adb14-client-ca\") pod \"route-controller-manager-cdfcd5df-g2mtn\" (UID: \"3be609b8-114c-4f9a-967b-9e8c9f5adb14\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn" Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.276079 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3be609b8-114c-4f9a-967b-9e8c9f5adb14-config\") pod \"route-controller-manager-cdfcd5df-g2mtn\" (UID: \"3be609b8-114c-4f9a-967b-9e8c9f5adb14\") " 
pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn" Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.276107 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3be609b8-114c-4f9a-967b-9e8c9f5adb14-serving-cert\") pod \"route-controller-manager-cdfcd5df-g2mtn\" (UID: \"3be609b8-114c-4f9a-967b-9e8c9f5adb14\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn" Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.276173 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpl4f\" (UniqueName: \"kubernetes.io/projected/3be609b8-114c-4f9a-967b-9e8c9f5adb14-kube-api-access-dpl4f\") pod \"route-controller-manager-cdfcd5df-g2mtn\" (UID: \"3be609b8-114c-4f9a-967b-9e8c9f5adb14\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn" Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.378342 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3be609b8-114c-4f9a-967b-9e8c9f5adb14-client-ca\") pod \"route-controller-manager-cdfcd5df-g2mtn\" (UID: \"3be609b8-114c-4f9a-967b-9e8c9f5adb14\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn" Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.378455 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3be609b8-114c-4f9a-967b-9e8c9f5adb14-config\") pod \"route-controller-manager-cdfcd5df-g2mtn\" (UID: \"3be609b8-114c-4f9a-967b-9e8c9f5adb14\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn" Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.378501 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3be609b8-114c-4f9a-967b-9e8c9f5adb14-serving-cert\") pod \"route-controller-manager-cdfcd5df-g2mtn\" (UID: \"3be609b8-114c-4f9a-967b-9e8c9f5adb14\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn" Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.378618 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpl4f\" (UniqueName: \"kubernetes.io/projected/3be609b8-114c-4f9a-967b-9e8c9f5adb14-kube-api-access-dpl4f\") pod \"route-controller-manager-cdfcd5df-g2mtn\" (UID: \"3be609b8-114c-4f9a-967b-9e8c9f5adb14\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn" Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.380405 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3be609b8-114c-4f9a-967b-9e8c9f5adb14-client-ca\") pod \"route-controller-manager-cdfcd5df-g2mtn\" (UID: \"3be609b8-114c-4f9a-967b-9e8c9f5adb14\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn" Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.382032 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3be609b8-114c-4f9a-967b-9e8c9f5adb14-config\") pod \"route-controller-manager-cdfcd5df-g2mtn\" (UID: \"3be609b8-114c-4f9a-967b-9e8c9f5adb14\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn" 
Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.385870 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3be609b8-114c-4f9a-967b-9e8c9f5adb14-serving-cert\") pod \"route-controller-manager-cdfcd5df-g2mtn\" (UID: \"3be609b8-114c-4f9a-967b-9e8c9f5adb14\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn"
Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.388078 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" event={"ID":"686e34aa-cd19-4741-be83-7ea0ee736c40","Type":"ContainerStarted","Data":"eec171b855d5722f473a8cb6a84b857ff7af28da0bc0c6d43f5d1212bcaef94f"}
Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.388258 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" event={"ID":"686e34aa-cd19-4741-be83-7ea0ee736c40","Type":"ContainerStarted","Data":"7713eeea922a5fbc516b811fb367c527bbc3b0e3ab98022750ddf71ae6611015"}
Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.389880 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-559465bd67-w26cn"
Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.398557 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-559465bd67-w26cn"
Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.402566 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpl4f\" (UniqueName: \"kubernetes.io/projected/3be609b8-114c-4f9a-967b-9e8c9f5adb14-kube-api-access-dpl4f\") pod \"route-controller-manager-cdfcd5df-g2mtn\" (UID: \"3be609b8-114c-4f9a-967b-9e8c9f5adb14\") " pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn"
Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.411150 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" podStartSLOduration=2.411123273 podStartE2EDuration="2.411123273s" podCreationTimestamp="2026-01-06 14:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:05:40.4101789 +0000 UTC m=+358.949866594" watchObservedRunningTime="2026-01-06 14:05:40.411123273 +0000 UTC m=+358.950810937"
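
podStartSLOduration above is the observed running time minus the pod's creation timestamp, with the image-pull window excluded; the zero-valued firstStartedPulling/lastFinishedPulling mean no pull happened here, so nothing is subtracted. A tiny sketch of that arithmetic (not the tracker's actual code):

```go
package main

import (
	"fmt"
	"time"
)

// Reproduces the podStartSLOduration arithmetic suggested by the
// pod_startup_latency_tracker.go fields: running time minus creation
// time, minus any image-pull window. Simplified illustration only.
func startSLO(created, observedRunning, firstPull, lastPull time.Time) time.Duration {
	d := observedRunning.Sub(created)
	if !firstPull.IsZero() && !lastPull.IsZero() {
		d -= lastPull.Sub(firstPull) // exclude time spent pulling images
	}
	return d
}

func main() {
	created, _ := time.Parse(time.RFC3339, "2026-01-06T14:05:38Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-01-06T14:05:40.410178900Z")
	fmt.Println(startSLO(created, running, time.Time{}, time.Time{})) // ≈2.41s, matching the log
}
```

Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.449349 4869 util.go:30] "No sandbox for pod can be found.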
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn"
Jan 06 14:05:40 crc kubenswrapper[4869]: I0106 14:05:40.649878 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn"]
Jan 06 14:05:40 crc kubenswrapper[4869]: W0106 14:05:40.650948 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3be609b8_114c_4f9a_967b_9e8c9f5adb14.slice/crio-568d7bfda47ba0a8a95126303c31ceae7304bcf70357db4ef7f41278ca641ad5 WatchSource:0}: Error finding container 568d7bfda47ba0a8a95126303c31ceae7304bcf70357db4ef7f41278ca641ad5: Status 404 returned error can't find the container with id 568d7bfda47ba0a8a95126303c31ceae7304bcf70357db4ef7f41278ca641ad5
Jan 06 14:05:41 crc kubenswrapper[4869]: I0106 14:05:41.397831 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn" event={"ID":"3be609b8-114c-4f9a-967b-9e8c9f5adb14","Type":"ContainerStarted","Data":"c928326c4e46effe7b5e63accb50334ba9cac7416e75850997774d32ddfe91c3"}
Jan 06 14:05:41 crc kubenswrapper[4869]: I0106 14:05:41.398231 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn"
Jan 06 14:05:41 crc kubenswrapper[4869]: I0106 14:05:41.398254 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn" event={"ID":"3be609b8-114c-4f9a-967b-9e8c9f5adb14","Type":"ContainerStarted","Data":"568d7bfda47ba0a8a95126303c31ceae7304bcf70357db4ef7f41278ca641ad5"}
Jan 06 14:05:41 crc kubenswrapper[4869]: I0106 14:05:41.403888 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn"
Jan 06 14:05:41 crc kubenswrapper[4869]: I0106 14:05:41.419956 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn" podStartSLOduration=3.419931545 podStartE2EDuration="3.419931545s" podCreationTimestamp="2026-01-06 14:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:05:41.417814493 +0000 UTC m=+359.957502167" watchObservedRunningTime="2026-01-06 14:05:41.419931545 +0000 UTC m=+359.959619229"
Jan 06 14:05:45 crc kubenswrapper[4869]: I0106 14:05:45.313992 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-559465bd67-w26cn"]
Jan 06 14:05:45 crc kubenswrapper[4869]: I0106 14:05:45.314469 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" podUID="686e34aa-cd19-4741-be83-7ea0ee736c40" containerName="controller-manager" containerID="cri-o://eec171b855d5722f473a8cb6a84b857ff7af28da0bc0c6d43f5d1212bcaef94f" gracePeriod=30
Jan 06 14:05:45 crc kubenswrapper[4869]: I0106 14:05:45.325033 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn"]
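
gracePeriod=30 here is the standard termination contract: the runtime is asked to stop the container, the process gets up to 30 seconds to exit after SIGTERM, and it is SIGKILLed only if the deadline passes. The exitCode=0 lines that follow show both containers exited cleanly within the window. A process-level illustration of the contract (the kubelet really does this through the CRI StopContainer call with a timeout, not raw signals):

```go
package main

import (
	"os/exec"
	"syscall"
	"time"
)

// TERM first, KILL when the grace period lapses. Sketch of the
// termination contract only, not kubelet or CRI-O code.
func main() {
	cmd := exec.Command("sleep", "300") // stand-in for a container process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	_ = cmd.Process.Signal(syscall.SIGTERM) // polite request to exit
	select {
	case <-done:
		// exited within the grace period (the exitCode=0 case in the log)
	case <-time.After(30 * time.Second): // gracePeriod=30
		_ = cmd.Process.Kill() // grace exhausted: SIGKILL
		<-done
	}
}
```

Jan 06 14:05:45 crc kubenswrapper[4869]: I0106 14:05:45.325241 4869 kuberuntime_container.go:808] "Killing container with a grace period"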
pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn" podUID="3be609b8-114c-4f9a-967b-9e8c9f5adb14" containerName="route-controller-manager" containerID="cri-o://c928326c4e46effe7b5e63accb50334ba9cac7416e75850997774d32ddfe91c3" gracePeriod=30 Jan 06 14:05:45 crc kubenswrapper[4869]: I0106 14:05:45.483097 4869 generic.go:334] "Generic (PLEG): container finished" podID="686e34aa-cd19-4741-be83-7ea0ee736c40" containerID="eec171b855d5722f473a8cb6a84b857ff7af28da0bc0c6d43f5d1212bcaef94f" exitCode=0 Jan 06 14:05:45 crc kubenswrapper[4869]: I0106 14:05:45.483409 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" event={"ID":"686e34aa-cd19-4741-be83-7ea0ee736c40","Type":"ContainerDied","Data":"eec171b855d5722f473a8cb6a84b857ff7af28da0bc0c6d43f5d1212bcaef94f"} Jan 06 14:05:45 crc kubenswrapper[4869]: I0106 14:05:45.484737 4869 generic.go:334] "Generic (PLEG): container finished" podID="3be609b8-114c-4f9a-967b-9e8c9f5adb14" containerID="c928326c4e46effe7b5e63accb50334ba9cac7416e75850997774d32ddfe91c3" exitCode=0 Jan 06 14:05:45 crc kubenswrapper[4869]: I0106 14:05:45.484759 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn" event={"ID":"3be609b8-114c-4f9a-967b-9e8c9f5adb14","Type":"ContainerDied","Data":"c928326c4e46effe7b5e63accb50334ba9cac7416e75850997774d32ddfe91c3"} Jan 06 14:05:45 crc kubenswrapper[4869]: I0106 14:05:45.916552 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn" Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.028464 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.063115 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3be609b8-114c-4f9a-967b-9e8c9f5adb14-config\") pod \"3be609b8-114c-4f9a-967b-9e8c9f5adb14\" (UID: \"3be609b8-114c-4f9a-967b-9e8c9f5adb14\") " Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.063177 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3be609b8-114c-4f9a-967b-9e8c9f5adb14-client-ca\") pod \"3be609b8-114c-4f9a-967b-9e8c9f5adb14\" (UID: \"3be609b8-114c-4f9a-967b-9e8c9f5adb14\") " Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.063269 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpl4f\" (UniqueName: \"kubernetes.io/projected/3be609b8-114c-4f9a-967b-9e8c9f5adb14-kube-api-access-dpl4f\") pod \"3be609b8-114c-4f9a-967b-9e8c9f5adb14\" (UID: \"3be609b8-114c-4f9a-967b-9e8c9f5adb14\") " Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.063386 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3be609b8-114c-4f9a-967b-9e8c9f5adb14-serving-cert\") pod \"3be609b8-114c-4f9a-967b-9e8c9f5adb14\" (UID: \"3be609b8-114c-4f9a-967b-9e8c9f5adb14\") " Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.063958 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3be609b8-114c-4f9a-967b-9e8c9f5adb14-client-ca" (OuterVolumeSpecName: "client-ca") pod "3be609b8-114c-4f9a-967b-9e8c9f5adb14" (UID: "3be609b8-114c-4f9a-967b-9e8c9f5adb14"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.063998 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3be609b8-114c-4f9a-967b-9e8c9f5adb14-config" (OuterVolumeSpecName: "config") pod "3be609b8-114c-4f9a-967b-9e8c9f5adb14" (UID: "3be609b8-114c-4f9a-967b-9e8c9f5adb14"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.068464 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3be609b8-114c-4f9a-967b-9e8c9f5adb14-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3be609b8-114c-4f9a-967b-9e8c9f5adb14" (UID: "3be609b8-114c-4f9a-967b-9e8c9f5adb14"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.068894 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3be609b8-114c-4f9a-967b-9e8c9f5adb14-kube-api-access-dpl4f" (OuterVolumeSpecName: "kube-api-access-dpl4f") pod "3be609b8-114c-4f9a-967b-9e8c9f5adb14" (UID: "3be609b8-114c-4f9a-967b-9e8c9f5adb14"). InnerVolumeSpecName "kube-api-access-dpl4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.164955 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/686e34aa-cd19-4741-be83-7ea0ee736c40-config\") pod \"686e34aa-cd19-4741-be83-7ea0ee736c40\" (UID: \"686e34aa-cd19-4741-be83-7ea0ee736c40\") " Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.165013 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686e34aa-cd19-4741-be83-7ea0ee736c40-serving-cert\") pod \"686e34aa-cd19-4741-be83-7ea0ee736c40\" (UID: \"686e34aa-cd19-4741-be83-7ea0ee736c40\") " Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.165042 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686e34aa-cd19-4741-be83-7ea0ee736c40-client-ca\") pod \"686e34aa-cd19-4741-be83-7ea0ee736c40\" (UID: \"686e34aa-cd19-4741-be83-7ea0ee736c40\") " Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.165138 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/686e34aa-cd19-4741-be83-7ea0ee736c40-proxy-ca-bundles\") pod \"686e34aa-cd19-4741-be83-7ea0ee736c40\" (UID: \"686e34aa-cd19-4741-be83-7ea0ee736c40\") " Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.165169 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p4jq\" (UniqueName: \"kubernetes.io/projected/686e34aa-cd19-4741-be83-7ea0ee736c40-kube-api-access-9p4jq\") pod \"686e34aa-cd19-4741-be83-7ea0ee736c40\" (UID: \"686e34aa-cd19-4741-be83-7ea0ee736c40\") " Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.165424 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3be609b8-114c-4f9a-967b-9e8c9f5adb14-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.165437 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3be609b8-114c-4f9a-967b-9e8c9f5adb14-client-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.165446 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpl4f\" (UniqueName: \"kubernetes.io/projected/3be609b8-114c-4f9a-967b-9e8c9f5adb14-kube-api-access-dpl4f\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.165457 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3be609b8-114c-4f9a-967b-9e8c9f5adb14-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.165973 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686e34aa-cd19-4741-be83-7ea0ee736c40-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "686e34aa-cd19-4741-be83-7ea0ee736c40" (UID: "686e34aa-cd19-4741-be83-7ea0ee736c40"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.165989 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686e34aa-cd19-4741-be83-7ea0ee736c40-client-ca" (OuterVolumeSpecName: "client-ca") pod "686e34aa-cd19-4741-be83-7ea0ee736c40" (UID: "686e34aa-cd19-4741-be83-7ea0ee736c40"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.166065 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686e34aa-cd19-4741-be83-7ea0ee736c40-config" (OuterVolumeSpecName: "config") pod "686e34aa-cd19-4741-be83-7ea0ee736c40" (UID: "686e34aa-cd19-4741-be83-7ea0ee736c40"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.167909 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/686e34aa-cd19-4741-be83-7ea0ee736c40-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "686e34aa-cd19-4741-be83-7ea0ee736c40" (UID: "686e34aa-cd19-4741-be83-7ea0ee736c40"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.169266 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/686e34aa-cd19-4741-be83-7ea0ee736c40-kube-api-access-9p4jq" (OuterVolumeSpecName: "kube-api-access-9p4jq") pod "686e34aa-cd19-4741-be83-7ea0ee736c40" (UID: "686e34aa-cd19-4741-be83-7ea0ee736c40"). InnerVolumeSpecName "kube-api-access-9p4jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.266896 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/686e34aa-cd19-4741-be83-7ea0ee736c40-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.266931 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p4jq\" (UniqueName: \"kubernetes.io/projected/686e34aa-cd19-4741-be83-7ea0ee736c40-kube-api-access-9p4jq\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.266945 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/686e34aa-cd19-4741-be83-7ea0ee736c40-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.266955 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/686e34aa-cd19-4741-be83-7ea0ee736c40-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.266963 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/686e34aa-cd19-4741-be83-7ea0ee736c40-client-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.491123 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn" event={"ID":"3be609b8-114c-4f9a-967b-9e8c9f5adb14","Type":"ContainerDied","Data":"568d7bfda47ba0a8a95126303c31ceae7304bcf70357db4ef7f41278ca641ad5"} Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.491169 4869 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn"
Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.491204 4869 scope.go:117] "RemoveContainer" containerID="c928326c4e46effe7b5e63accb50334ba9cac7416e75850997774d32ddfe91c3"
Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.492698 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-559465bd67-w26cn" event={"ID":"686e34aa-cd19-4741-be83-7ea0ee736c40","Type":"ContainerDied","Data":"7713eeea922a5fbc516b811fb367c527bbc3b0e3ab98022750ddf71ae6611015"}
Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.492744 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-559465bd67-w26cn"
Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.518104 4869 scope.go:117] "RemoveContainer" containerID="eec171b855d5722f473a8cb6a84b857ff7af28da0bc0c6d43f5d1212bcaef94f"
Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.538063 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-559465bd67-w26cn"]
Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.544528 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-559465bd67-w26cn"]
Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.567425 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn"]
Jan 06 14:05:46 crc kubenswrapper[4869]: I0106 14:05:46.578088 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cdfcd5df-g2mtn"]
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.142954 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-clh2w"]
Jan 06 14:05:47 crc kubenswrapper[4869]: E0106 14:05:47.143278 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3be609b8-114c-4f9a-967b-9e8c9f5adb14" containerName="route-controller-manager"
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.143292 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3be609b8-114c-4f9a-967b-9e8c9f5adb14" containerName="route-controller-manager"
Jan 06 14:05:47 crc kubenswrapper[4869]: E0106 14:05:47.143301 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686e34aa-cd19-4741-be83-7ea0ee736c40" containerName="controller-manager"
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.143308 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="686e34aa-cd19-4741-be83-7ea0ee736c40" containerName="controller-manager"
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.143393 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3be609b8-114c-4f9a-967b-9e8c9f5adb14" containerName="route-controller-manager"
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.143412 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="686e34aa-cd19-4741-be83-7ea0ee736c40" containerName="controller-manager"
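
This stretch is a complete pod turnover driven by sync-loop events: DELETE (graceful deletion begins and containers are killed), REMOVE (the API object is finally gone and the pod is forgotten), then ADD for the replacement deployment's pods, with stale CPU/memory state swept at admission. A schematic of the select-driven loop behind those lines, with invented types (the real loop also services probe results, housekeeping ticks, and more):

```go
package main

// Invented types illustrating the kubelet sync-loop dispatch that
// produces the SyncLoop ADD/UPDATE/DELETE/REMOVE and PLEG log lines.
type podUpdate struct {
	Op   string   // "ADD", "UPDATE", "DELETE" (graceful), "REMOVE" (final)
	Pods []string // pod keys, e.g. "openshift-controller-manager/..."
}

type plegEvent struct {
	PodID string
	Type  string // "ContainerStarted", "ContainerDied", ...
}

func syncLoop(updates <-chan podUpdate, pleg <-chan plegEvent, stop <-chan struct{}) {
	for {
		select {
		case u := <-updates:
			switch u.Op {
			case "ADD", "UPDATE":
				// dispatch pod workers toward the new desired state
			case "DELETE":
				// graceful deletion: kill containers with the grace period
			case "REMOVE":
				// API object gone: clean up and forget the pod
			}
		case <-pleg:
			// runtime state changed: re-sync the affected pod
		case <-stop:
			return
		}
	}
}

func main() {
	stop := make(chan struct{})
	close(stop)
	syncLoop(nil, nil, stop) // channel wiring omitted in this sketch
}
```

Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.143864 4869 util.go:30] "No sandbox for pod can be found.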
Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w"
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.146390 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.146645 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb"]
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.147284 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb"
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.148992 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.149166 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.149307 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.149700 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.150585 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.150612 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.150640 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.150655 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.150595 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.150961 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.154120 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.156592 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.159878 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb"]
Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.167755 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-clh2w"]
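
The reflector.go "Caches populated" lines show the kubelet starting a list-and-watch cache for each ConfigMap and Secret the new pods reference, so volume contents and env vars are served from a local cache rather than hitting the API server on every sync. The same list+watch pattern, standalone, using client-go informers (namespace chosen to mirror the log; any namespace would do):

```go
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

// Watch ConfigMaps in one namespace through an informer cache; the
// initial list completing is the moment a "Caches populated" style
// message would fire.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	f := informers.NewSharedInformerFactoryWithOptions(cs, 10*time.Minute,
		informers.WithNamespace("openshift-controller-manager"))
	inf := f.Core().V1().ConfigMaps().Informer()
	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("cached:", obj.(*v1.ConfigMap).Name)
		},
	})
	stop := make(chan struct{})
	f.Start(stop)
	cache.WaitForCacheSync(stop, inf.HasSynced) // cache is now populated
	<-stop
}
```

Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.280742 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: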
\"kubernetes.io/secret/4cc82658-f17b-4d38-b47e-078421f97005-serving-cert\") pod \"route-controller-manager-76946b564d-mr4cb\" (UID: \"4cc82658-f17b-4d38-b47e-078421f97005\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.280893 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfcnb\" (UniqueName: \"kubernetes.io/projected/9ef6e0b3-091e-417d-b24f-5b143680f3a9-kube-api-access-nfcnb\") pod \"controller-manager-d6f97d578-clh2w\" (UID: \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\") " pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.280930 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cc82658-f17b-4d38-b47e-078421f97005-config\") pod \"route-controller-manager-76946b564d-mr4cb\" (UID: \"4cc82658-f17b-4d38-b47e-078421f97005\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.280966 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef6e0b3-091e-417d-b24f-5b143680f3a9-config\") pod \"controller-manager-d6f97d578-clh2w\" (UID: \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\") " pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.281023 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ef6e0b3-091e-417d-b24f-5b143680f3a9-serving-cert\") pod \"controller-manager-d6f97d578-clh2w\" (UID: \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\") " pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.281062 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cc82658-f17b-4d38-b47e-078421f97005-client-ca\") pod \"route-controller-manager-76946b564d-mr4cb\" (UID: \"4cc82658-f17b-4d38-b47e-078421f97005\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.281105 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkz48\" (UniqueName: \"kubernetes.io/projected/4cc82658-f17b-4d38-b47e-078421f97005-kube-api-access-lkz48\") pod \"route-controller-manager-76946b564d-mr4cb\" (UID: \"4cc82658-f17b-4d38-b47e-078421f97005\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.281132 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ef6e0b3-091e-417d-b24f-5b143680f3a9-client-ca\") pod \"controller-manager-d6f97d578-clh2w\" (UID: \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\") " pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.281162 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/9ef6e0b3-091e-417d-b24f-5b143680f3a9-proxy-ca-bundles\") pod \"controller-manager-d6f97d578-clh2w\" (UID: \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\") " pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.382143 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9ef6e0b3-091e-417d-b24f-5b143680f3a9-proxy-ca-bundles\") pod \"controller-manager-d6f97d578-clh2w\" (UID: \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\") " pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.382213 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cc82658-f17b-4d38-b47e-078421f97005-serving-cert\") pod \"route-controller-manager-76946b564d-mr4cb\" (UID: \"4cc82658-f17b-4d38-b47e-078421f97005\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.382251 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfcnb\" (UniqueName: \"kubernetes.io/projected/9ef6e0b3-091e-417d-b24f-5b143680f3a9-kube-api-access-nfcnb\") pod \"controller-manager-d6f97d578-clh2w\" (UID: \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\") " pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.382279 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cc82658-f17b-4d38-b47e-078421f97005-config\") pod \"route-controller-manager-76946b564d-mr4cb\" (UID: \"4cc82658-f17b-4d38-b47e-078421f97005\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.382304 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef6e0b3-091e-417d-b24f-5b143680f3a9-config\") pod \"controller-manager-d6f97d578-clh2w\" (UID: \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\") " pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.382342 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ef6e0b3-091e-417d-b24f-5b143680f3a9-serving-cert\") pod \"controller-manager-d6f97d578-clh2w\" (UID: \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\") " pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.382368 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cc82658-f17b-4d38-b47e-078421f97005-client-ca\") pod \"route-controller-manager-76946b564d-mr4cb\" (UID: \"4cc82658-f17b-4d38-b47e-078421f97005\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.382406 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkz48\" (UniqueName: \"kubernetes.io/projected/4cc82658-f17b-4d38-b47e-078421f97005-kube-api-access-lkz48\") pod \"route-controller-manager-76946b564d-mr4cb\" (UID: 
\"4cc82658-f17b-4d38-b47e-078421f97005\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.382428 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ef6e0b3-091e-417d-b24f-5b143680f3a9-client-ca\") pod \"controller-manager-d6f97d578-clh2w\" (UID: \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\") " pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.384068 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cc82658-f17b-4d38-b47e-078421f97005-client-ca\") pod \"route-controller-manager-76946b564d-mr4cb\" (UID: \"4cc82658-f17b-4d38-b47e-078421f97005\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.384458 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cc82658-f17b-4d38-b47e-078421f97005-config\") pod \"route-controller-manager-76946b564d-mr4cb\" (UID: \"4cc82658-f17b-4d38-b47e-078421f97005\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.384481 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef6e0b3-091e-417d-b24f-5b143680f3a9-config\") pod \"controller-manager-d6f97d578-clh2w\" (UID: \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\") " pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.384517 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9ef6e0b3-091e-417d-b24f-5b143680f3a9-proxy-ca-bundles\") pod \"controller-manager-d6f97d578-clh2w\" (UID: \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\") " pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.385083 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ef6e0b3-091e-417d-b24f-5b143680f3a9-client-ca\") pod \"controller-manager-d6f97d578-clh2w\" (UID: \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\") " pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.394568 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ef6e0b3-091e-417d-b24f-5b143680f3a9-serving-cert\") pod \"controller-manager-d6f97d578-clh2w\" (UID: \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\") " pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.394913 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cc82658-f17b-4d38-b47e-078421f97005-serving-cert\") pod \"route-controller-manager-76946b564d-mr4cb\" (UID: \"4cc82658-f17b-4d38-b47e-078421f97005\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.399254 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lkz48\" (UniqueName: \"kubernetes.io/projected/4cc82658-f17b-4d38-b47e-078421f97005-kube-api-access-lkz48\") pod \"route-controller-manager-76946b564d-mr4cb\" (UID: \"4cc82658-f17b-4d38-b47e-078421f97005\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.404870 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfcnb\" (UniqueName: \"kubernetes.io/projected/9ef6e0b3-091e-417d-b24f-5b143680f3a9-kube-api-access-nfcnb\") pod \"controller-manager-d6f97d578-clh2w\" (UID: \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\") " pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.467441 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.475995 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.711257 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3be609b8-114c-4f9a-967b-9e8c9f5adb14" path="/var/lib/kubelet/pods/3be609b8-114c-4f9a-967b-9e8c9f5adb14/volumes" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.712365 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="686e34aa-cd19-4741-be83-7ea0ee736c40" path="/var/lib/kubelet/pods/686e34aa-cd19-4741-be83-7ea0ee736c40/volumes" Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.863748 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-clh2w"] Jan 06 14:05:47 crc kubenswrapper[4869]: I0106 14:05:47.921525 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb"] Jan 06 14:05:47 crc kubenswrapper[4869]: W0106 14:05:47.927599 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cc82658_f17b_4d38_b47e_078421f97005.slice/crio-b0df986eb8455b36415be222f6b87db0f191c19da3b1beaf088cc1a3b3f4b0ce WatchSource:0}: Error finding container b0df986eb8455b36415be222f6b87db0f191c19da3b1beaf088cc1a3b3f4b0ce: Status 404 returned error can't find the container with id b0df986eb8455b36415be222f6b87db0f191c19da3b1beaf088cc1a3b3f4b0ce Jan 06 14:05:48 crc kubenswrapper[4869]: I0106 14:05:48.510719 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" event={"ID":"9ef6e0b3-091e-417d-b24f-5b143680f3a9","Type":"ContainerStarted","Data":"e96976d0564f7e12746db21ce59d538983491b8bc982d71a378024a71915caf4"} Jan 06 14:05:48 crc kubenswrapper[4869]: I0106 14:05:48.511036 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" event={"ID":"9ef6e0b3-091e-417d-b24f-5b143680f3a9","Type":"ContainerStarted","Data":"8803ce780de48a16908232c2e8e458cd7f905659e4a24b7f54745ebde9c72ce2"} Jan 06 14:05:48 crc kubenswrapper[4869]: I0106 14:05:48.511055 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" Jan 06 14:05:48 crc kubenswrapper[4869]: I0106 14:05:48.512698 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" event={"ID":"4cc82658-f17b-4d38-b47e-078421f97005","Type":"ContainerStarted","Data":"f0b43d8a486887eb651400498d52437bfdbade41f1ce1ed269247d7d324114d3"} Jan 06 14:05:48 crc kubenswrapper[4869]: I0106 14:05:48.512747 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" event={"ID":"4cc82658-f17b-4d38-b47e-078421f97005","Type":"ContainerStarted","Data":"b0df986eb8455b36415be222f6b87db0f191c19da3b1beaf088cc1a3b3f4b0ce"} Jan 06 14:05:48 crc kubenswrapper[4869]: I0106 14:05:48.513131 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" Jan 06 14:05:48 crc kubenswrapper[4869]: I0106 14:05:48.518692 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" Jan 06 14:05:48 crc kubenswrapper[4869]: I0106 14:05:48.530926 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" podStartSLOduration=3.530902951 podStartE2EDuration="3.530902951s" podCreationTimestamp="2026-01-06 14:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:05:48.527723433 +0000 UTC m=+367.067411097" watchObservedRunningTime="2026-01-06 14:05:48.530902951 +0000 UTC m=+367.070590615" Jan 06 14:05:48 crc kubenswrapper[4869]: I0106 14:05:48.549737 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" podStartSLOduration=3.5497170049999998 podStartE2EDuration="3.549717005s" podCreationTimestamp="2026-01-06 14:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:05:48.547082061 +0000 UTC m=+367.086769735" watchObservedRunningTime="2026-01-06 14:05:48.549717005 +0000 UTC m=+367.089404669" Jan 06 14:05:48 crc kubenswrapper[4869]: I0106 14:05:48.620753 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.315427 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8vjm9"] Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.316955 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8vjm9" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.320082 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.328036 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vjm9"] Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.488155 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b564ad-004b-445b-8814-d4be0e085891-utilities\") pod \"certified-operators-8vjm9\" (UID: \"83b564ad-004b-445b-8814-d4be0e085891\") " pod="openshift-marketplace/certified-operators-8vjm9" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.488285 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxrsl\" (UniqueName: \"kubernetes.io/projected/83b564ad-004b-445b-8814-d4be0e085891-kube-api-access-lxrsl\") pod \"certified-operators-8vjm9\" (UID: \"83b564ad-004b-445b-8814-d4be0e085891\") " pod="openshift-marketplace/certified-operators-8vjm9" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.488326 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b564ad-004b-445b-8814-d4be0e085891-catalog-content\") pod \"certified-operators-8vjm9\" (UID: \"83b564ad-004b-445b-8814-d4be0e085891\") " pod="openshift-marketplace/certified-operators-8vjm9" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.512076 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jph6b"] Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.515088 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jph6b" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.518469 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.530234 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jph6b"] Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.589400 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxrsl\" (UniqueName: \"kubernetes.io/projected/83b564ad-004b-445b-8814-d4be0e085891-kube-api-access-lxrsl\") pod \"certified-operators-8vjm9\" (UID: \"83b564ad-004b-445b-8814-d4be0e085891\") " pod="openshift-marketplace/certified-operators-8vjm9" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.589499 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b564ad-004b-445b-8814-d4be0e085891-catalog-content\") pod \"certified-operators-8vjm9\" (UID: \"83b564ad-004b-445b-8814-d4be0e085891\") " pod="openshift-marketplace/certified-operators-8vjm9" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.589565 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b564ad-004b-445b-8814-d4be0e085891-utilities\") pod \"certified-operators-8vjm9\" (UID: \"83b564ad-004b-445b-8814-d4be0e085891\") " pod="openshift-marketplace/certified-operators-8vjm9" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.590393 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b564ad-004b-445b-8814-d4be0e085891-utilities\") pod \"certified-operators-8vjm9\" (UID: \"83b564ad-004b-445b-8814-d4be0e085891\") " pod="openshift-marketplace/certified-operators-8vjm9" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.590431 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b564ad-004b-445b-8814-d4be0e085891-catalog-content\") pod \"certified-operators-8vjm9\" (UID: \"83b564ad-004b-445b-8814-d4be0e085891\") " pod="openshift-marketplace/certified-operators-8vjm9" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.609930 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxrsl\" (UniqueName: \"kubernetes.io/projected/83b564ad-004b-445b-8814-d4be0e085891-kube-api-access-lxrsl\") pod \"certified-operators-8vjm9\" (UID: \"83b564ad-004b-445b-8814-d4be0e085891\") " pod="openshift-marketplace/certified-operators-8vjm9" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.637267 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8vjm9" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.690545 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2e5a2b-84ef-4926-9008-dec653a3c947-utilities\") pod \"community-operators-jph6b\" (UID: \"1f2e5a2b-84ef-4926-9008-dec653a3c947\") " pod="openshift-marketplace/community-operators-jph6b" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.690614 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2e5a2b-84ef-4926-9008-dec653a3c947-catalog-content\") pod \"community-operators-jph6b\" (UID: \"1f2e5a2b-84ef-4926-9008-dec653a3c947\") " pod="openshift-marketplace/community-operators-jph6b" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.690659 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr85w\" (UniqueName: \"kubernetes.io/projected/1f2e5a2b-84ef-4926-9008-dec653a3c947-kube-api-access-vr85w\") pod \"community-operators-jph6b\" (UID: \"1f2e5a2b-84ef-4926-9008-dec653a3c947\") " pod="openshift-marketplace/community-operators-jph6b" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.791450 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2e5a2b-84ef-4926-9008-dec653a3c947-utilities\") pod \"community-operators-jph6b\" (UID: \"1f2e5a2b-84ef-4926-9008-dec653a3c947\") " pod="openshift-marketplace/community-operators-jph6b" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.791850 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2e5a2b-84ef-4926-9008-dec653a3c947-catalog-content\") pod \"community-operators-jph6b\" (UID: \"1f2e5a2b-84ef-4926-9008-dec653a3c947\") " pod="openshift-marketplace/community-operators-jph6b" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.791884 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr85w\" (UniqueName: \"kubernetes.io/projected/1f2e5a2b-84ef-4926-9008-dec653a3c947-kube-api-access-vr85w\") pod \"community-operators-jph6b\" (UID: \"1f2e5a2b-84ef-4926-9008-dec653a3c947\") " pod="openshift-marketplace/community-operators-jph6b" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.792234 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2e5a2b-84ef-4926-9008-dec653a3c947-utilities\") pod \"community-operators-jph6b\" (UID: \"1f2e5a2b-84ef-4926-9008-dec653a3c947\") " pod="openshift-marketplace/community-operators-jph6b" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.792590 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2e5a2b-84ef-4926-9008-dec653a3c947-catalog-content\") pod \"community-operators-jph6b\" (UID: \"1f2e5a2b-84ef-4926-9008-dec653a3c947\") " pod="openshift-marketplace/community-operators-jph6b" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.811946 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr85w\" (UniqueName: \"kubernetes.io/projected/1f2e5a2b-84ef-4926-9008-dec653a3c947-kube-api-access-vr85w\") pod 
\"community-operators-jph6b\" (UID: \"1f2e5a2b-84ef-4926-9008-dec653a3c947\") " pod="openshift-marketplace/community-operators-jph6b" Jan 06 14:05:55 crc kubenswrapper[4869]: I0106 14:05:55.838532 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jph6b" Jan 06 14:05:56 crc kubenswrapper[4869]: I0106 14:05:56.086222 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vjm9"] Jan 06 14:05:56 crc kubenswrapper[4869]: I0106 14:05:56.324103 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jph6b"] Jan 06 14:05:56 crc kubenswrapper[4869]: W0106 14:05:56.330184 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2e5a2b_84ef_4926_9008_dec653a3c947.slice/crio-57d11ec2fab0cc2876c5b665ab08b55db8f51146f6cb70748bad7a8b53b16ad2 WatchSource:0}: Error finding container 57d11ec2fab0cc2876c5b665ab08b55db8f51146f6cb70748bad7a8b53b16ad2: Status 404 returned error can't find the container with id 57d11ec2fab0cc2876c5b665ab08b55db8f51146f6cb70748bad7a8b53b16ad2 Jan 06 14:05:56 crc kubenswrapper[4869]: I0106 14:05:56.567524 4869 generic.go:334] "Generic (PLEG): container finished" podID="1f2e5a2b-84ef-4926-9008-dec653a3c947" containerID="ff17a4a27f1a4bb6f13d7e15851ab8a1bba67f8777e1f2ce057153d965feb5a4" exitCode=0 Jan 06 14:05:56 crc kubenswrapper[4869]: I0106 14:05:56.567784 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jph6b" event={"ID":"1f2e5a2b-84ef-4926-9008-dec653a3c947","Type":"ContainerDied","Data":"ff17a4a27f1a4bb6f13d7e15851ab8a1bba67f8777e1f2ce057153d965feb5a4"} Jan 06 14:05:56 crc kubenswrapper[4869]: I0106 14:05:56.567823 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jph6b" event={"ID":"1f2e5a2b-84ef-4926-9008-dec653a3c947","Type":"ContainerStarted","Data":"57d11ec2fab0cc2876c5b665ab08b55db8f51146f6cb70748bad7a8b53b16ad2"} Jan 06 14:05:56 crc kubenswrapper[4869]: I0106 14:05:56.570187 4869 generic.go:334] "Generic (PLEG): container finished" podID="83b564ad-004b-445b-8814-d4be0e085891" containerID="9ae986d3faa6b2be68d67569602c679c3292f4ed6eb01dba29525def70b13460" exitCode=0 Jan 06 14:05:56 crc kubenswrapper[4869]: I0106 14:05:56.570233 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vjm9" event={"ID":"83b564ad-004b-445b-8814-d4be0e085891","Type":"ContainerDied","Data":"9ae986d3faa6b2be68d67569602c679c3292f4ed6eb01dba29525def70b13460"} Jan 06 14:05:56 crc kubenswrapper[4869]: I0106 14:05:56.570261 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vjm9" event={"ID":"83b564ad-004b-445b-8814-d4be0e085891","Type":"ContainerStarted","Data":"ef026048c5a7c9347eaa6fc19726f43fa6f5bde42378800eb9c319ff762649dc"} Jan 06 14:05:57 crc kubenswrapper[4869]: I0106 14:05:57.581010 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jph6b" event={"ID":"1f2e5a2b-84ef-4926-9008-dec653a3c947","Type":"ContainerStarted","Data":"549d2bdf5ea006411fd905526a5133aad7b1eaf14e8b7c019008c42f0389b243"} Jan 06 14:05:57 crc kubenswrapper[4869]: I0106 14:05:57.585133 4869 generic.go:334] "Generic (PLEG): container finished" podID="83b564ad-004b-445b-8814-d4be0e085891" 
containerID="67e02efbf9dca4f73fa9d837b576229a9bd1354b41a0cf236ec81a98c9f32167" exitCode=0 Jan 06 14:05:57 crc kubenswrapper[4869]: I0106 14:05:57.585182 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vjm9" event={"ID":"83b564ad-004b-445b-8814-d4be0e085891","Type":"ContainerDied","Data":"67e02efbf9dca4f73fa9d837b576229a9bd1354b41a0cf236ec81a98c9f32167"} Jan 06 14:05:57 crc kubenswrapper[4869]: I0106 14:05:57.913069 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-99k25"] Jan 06 14:05:57 crc kubenswrapper[4869]: I0106 14:05:57.914282 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-99k25" Jan 06 14:05:57 crc kubenswrapper[4869]: I0106 14:05:57.917980 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 06 14:05:57 crc kubenswrapper[4869]: I0106 14:05:57.927632 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-99k25"] Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.036099 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d025cef5-8e65-4270-afc4-838c1a166ad6-utilities\") pod \"redhat-marketplace-99k25\" (UID: \"d025cef5-8e65-4270-afc4-838c1a166ad6\") " pod="openshift-marketplace/redhat-marketplace-99k25" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.036169 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fgrh\" (UniqueName: \"kubernetes.io/projected/d025cef5-8e65-4270-afc4-838c1a166ad6-kube-api-access-2fgrh\") pod \"redhat-marketplace-99k25\" (UID: \"d025cef5-8e65-4270-afc4-838c1a166ad6\") " pod="openshift-marketplace/redhat-marketplace-99k25" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.036196 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d025cef5-8e65-4270-afc4-838c1a166ad6-catalog-content\") pod \"redhat-marketplace-99k25\" (UID: \"d025cef5-8e65-4270-afc4-838c1a166ad6\") " pod="openshift-marketplace/redhat-marketplace-99k25" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.064489 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-clh2w"] Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.064760 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" podUID="9ef6e0b3-091e-417d-b24f-5b143680f3a9" containerName="controller-manager" containerID="cri-o://e96976d0564f7e12746db21ce59d538983491b8bc982d71a378024a71915caf4" gracePeriod=30 Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.111569 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kb6kw"] Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.113021 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kb6kw" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.115848 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.120835 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kb6kw"] Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.137186 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d025cef5-8e65-4270-afc4-838c1a166ad6-utilities\") pod \"redhat-marketplace-99k25\" (UID: \"d025cef5-8e65-4270-afc4-838c1a166ad6\") " pod="openshift-marketplace/redhat-marketplace-99k25" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.137252 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fgrh\" (UniqueName: \"kubernetes.io/projected/d025cef5-8e65-4270-afc4-838c1a166ad6-kube-api-access-2fgrh\") pod \"redhat-marketplace-99k25\" (UID: \"d025cef5-8e65-4270-afc4-838c1a166ad6\") " pod="openshift-marketplace/redhat-marketplace-99k25" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.137275 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d025cef5-8e65-4270-afc4-838c1a166ad6-catalog-content\") pod \"redhat-marketplace-99k25\" (UID: \"d025cef5-8e65-4270-afc4-838c1a166ad6\") " pod="openshift-marketplace/redhat-marketplace-99k25" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.137785 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d025cef5-8e65-4270-afc4-838c1a166ad6-utilities\") pod \"redhat-marketplace-99k25\" (UID: \"d025cef5-8e65-4270-afc4-838c1a166ad6\") " pod="openshift-marketplace/redhat-marketplace-99k25" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.137875 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d025cef5-8e65-4270-afc4-838c1a166ad6-catalog-content\") pod \"redhat-marketplace-99k25\" (UID: \"d025cef5-8e65-4270-afc4-838c1a166ad6\") " pod="openshift-marketplace/redhat-marketplace-99k25" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.157762 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fgrh\" (UniqueName: \"kubernetes.io/projected/d025cef5-8e65-4270-afc4-838c1a166ad6-kube-api-access-2fgrh\") pod \"redhat-marketplace-99k25\" (UID: \"d025cef5-8e65-4270-afc4-838c1a166ad6\") " pod="openshift-marketplace/redhat-marketplace-99k25" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.228880 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-99k25" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.243465 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvnkn\" (UniqueName: \"kubernetes.io/projected/4e4dd706-de57-4440-8881-d5f18ea2506e-kube-api-access-qvnkn\") pod \"redhat-operators-kb6kw\" (UID: \"4e4dd706-de57-4440-8881-d5f18ea2506e\") " pod="openshift-marketplace/redhat-operators-kb6kw" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.243800 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e4dd706-de57-4440-8881-d5f18ea2506e-catalog-content\") pod \"redhat-operators-kb6kw\" (UID: \"4e4dd706-de57-4440-8881-d5f18ea2506e\") " pod="openshift-marketplace/redhat-operators-kb6kw" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.243834 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e4dd706-de57-4440-8881-d5f18ea2506e-utilities\") pod \"redhat-operators-kb6kw\" (UID: \"4e4dd706-de57-4440-8881-d5f18ea2506e\") " pod="openshift-marketplace/redhat-operators-kb6kw" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.345051 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e4dd706-de57-4440-8881-d5f18ea2506e-catalog-content\") pod \"redhat-operators-kb6kw\" (UID: \"4e4dd706-de57-4440-8881-d5f18ea2506e\") " pod="openshift-marketplace/redhat-operators-kb6kw" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.345114 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e4dd706-de57-4440-8881-d5f18ea2506e-utilities\") pod \"redhat-operators-kb6kw\" (UID: \"4e4dd706-de57-4440-8881-d5f18ea2506e\") " pod="openshift-marketplace/redhat-operators-kb6kw" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.345178 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvnkn\" (UniqueName: \"kubernetes.io/projected/4e4dd706-de57-4440-8881-d5f18ea2506e-kube-api-access-qvnkn\") pod \"redhat-operators-kb6kw\" (UID: \"4e4dd706-de57-4440-8881-d5f18ea2506e\") " pod="openshift-marketplace/redhat-operators-kb6kw" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.345705 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e4dd706-de57-4440-8881-d5f18ea2506e-catalog-content\") pod \"redhat-operators-kb6kw\" (UID: \"4e4dd706-de57-4440-8881-d5f18ea2506e\") " pod="openshift-marketplace/redhat-operators-kb6kw" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.346032 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e4dd706-de57-4440-8881-d5f18ea2506e-utilities\") pod \"redhat-operators-kb6kw\" (UID: \"4e4dd706-de57-4440-8881-d5f18ea2506e\") " pod="openshift-marketplace/redhat-operators-kb6kw" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.368919 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvnkn\" (UniqueName: \"kubernetes.io/projected/4e4dd706-de57-4440-8881-d5f18ea2506e-kube-api-access-qvnkn\") pod \"redhat-operators-kb6kw\" (UID: 
\"4e4dd706-de57-4440-8881-d5f18ea2506e\") " pod="openshift-marketplace/redhat-operators-kb6kw" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.511072 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kb6kw" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.595605 4869 generic.go:334] "Generic (PLEG): container finished" podID="9ef6e0b3-091e-417d-b24f-5b143680f3a9" containerID="e96976d0564f7e12746db21ce59d538983491b8bc982d71a378024a71915caf4" exitCode=0 Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.595687 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" event={"ID":"9ef6e0b3-091e-417d-b24f-5b143680f3a9","Type":"ContainerDied","Data":"e96976d0564f7e12746db21ce59d538983491b8bc982d71a378024a71915caf4"} Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.597563 4869 generic.go:334] "Generic (PLEG): container finished" podID="1f2e5a2b-84ef-4926-9008-dec653a3c947" containerID="549d2bdf5ea006411fd905526a5133aad7b1eaf14e8b7c019008c42f0389b243" exitCode=0 Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.597608 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jph6b" event={"ID":"1f2e5a2b-84ef-4926-9008-dec653a3c947","Type":"ContainerDied","Data":"549d2bdf5ea006411fd905526a5133aad7b1eaf14e8b7c019008c42f0389b243"} Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.604389 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vjm9" event={"ID":"83b564ad-004b-445b-8814-d4be0e085891","Type":"ContainerStarted","Data":"9c3e3b08a57a5e8f9c3429a3d02efe99e4fbc0968f6e1a54babccf7de05cba0b"} Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.643693 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8vjm9" podStartSLOduration=1.932053861 podStartE2EDuration="3.643653216s" podCreationTimestamp="2026-01-06 14:05:55 +0000 UTC" firstStartedPulling="2026-01-06 14:05:56.571690458 +0000 UTC m=+375.111378142" lastFinishedPulling="2026-01-06 14:05:58.283289833 +0000 UTC m=+376.822977497" observedRunningTime="2026-01-06 14:05:58.637117018 +0000 UTC m=+377.176804682" watchObservedRunningTime="2026-01-06 14:05:58.643653216 +0000 UTC m=+377.183340880" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.648911 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.729442 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-99k25"] Jan 06 14:05:58 crc kubenswrapper[4869]: W0106 14:05:58.734792 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd025cef5_8e65_4270_afc4_838c1a166ad6.slice/crio-b3ee7f72df55000763af4f9f3ccedcbd11d41870ce6e271e3525048bdbc7564a WatchSource:0}: Error finding container b3ee7f72df55000763af4f9f3ccedcbd11d41870ce6e271e3525048bdbc7564a: Status 404 returned error can't find the container with id b3ee7f72df55000763af4f9f3ccedcbd11d41870ce6e271e3525048bdbc7564a Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.749699 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfcnb\" (UniqueName: \"kubernetes.io/projected/9ef6e0b3-091e-417d-b24f-5b143680f3a9-kube-api-access-nfcnb\") pod \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\" (UID: \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\") " Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.749801 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ef6e0b3-091e-417d-b24f-5b143680f3a9-serving-cert\") pod \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\" (UID: \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\") " Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.749828 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ef6e0b3-091e-417d-b24f-5b143680f3a9-client-ca\") pod \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\" (UID: \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\") " Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.749883 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9ef6e0b3-091e-417d-b24f-5b143680f3a9-proxy-ca-bundles\") pod \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\" (UID: \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\") " Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.749925 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef6e0b3-091e-417d-b24f-5b143680f3a9-config\") pod \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\" (UID: \"9ef6e0b3-091e-417d-b24f-5b143680f3a9\") " Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.750718 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef6e0b3-091e-417d-b24f-5b143680f3a9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9ef6e0b3-091e-417d-b24f-5b143680f3a9" (UID: "9ef6e0b3-091e-417d-b24f-5b143680f3a9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.751283 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef6e0b3-091e-417d-b24f-5b143680f3a9-config" (OuterVolumeSpecName: "config") pod "9ef6e0b3-091e-417d-b24f-5b143680f3a9" (UID: "9ef6e0b3-091e-417d-b24f-5b143680f3a9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.752051 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef6e0b3-091e-417d-b24f-5b143680f3a9-client-ca" (OuterVolumeSpecName: "client-ca") pod "9ef6e0b3-091e-417d-b24f-5b143680f3a9" (UID: "9ef6e0b3-091e-417d-b24f-5b143680f3a9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.754530 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef6e0b3-091e-417d-b24f-5b143680f3a9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9ef6e0b3-091e-417d-b24f-5b143680f3a9" (UID: "9ef6e0b3-091e-417d-b24f-5b143680f3a9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.754675 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ef6e0b3-091e-417d-b24f-5b143680f3a9-kube-api-access-nfcnb" (OuterVolumeSpecName: "kube-api-access-nfcnb") pod "9ef6e0b3-091e-417d-b24f-5b143680f3a9" (UID: "9ef6e0b3-091e-417d-b24f-5b143680f3a9"). InnerVolumeSpecName "kube-api-access-nfcnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.851554 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ef6e0b3-091e-417d-b24f-5b143680f3a9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.851599 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9ef6e0b3-091e-417d-b24f-5b143680f3a9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.851642 4869 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9ef6e0b3-091e-417d-b24f-5b143680f3a9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.851657 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef6e0b3-091e-417d-b24f-5b143680f3a9-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.851702 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfcnb\" (UniqueName: \"kubernetes.io/projected/9ef6e0b3-091e-417d-b24f-5b143680f3a9-kube-api-access-nfcnb\") on node \"crc\" DevicePath \"\"" Jan 06 14:05:58 crc kubenswrapper[4869]: I0106 14:05:58.948816 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kb6kw"] Jan 06 14:05:58 crc kubenswrapper[4869]: W0106 14:05:58.956726 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e4dd706_de57_4440_8881_d5f18ea2506e.slice/crio-be8f308dfdd6d476c6248ed06ac07a5b29e47995165221897e01e8274de9c0e2 WatchSource:0}: Error finding container be8f308dfdd6d476c6248ed06ac07a5b29e47995165221897e01e8274de9c0e2: Status 404 returned error can't find the container with id be8f308dfdd6d476c6248ed06ac07a5b29e47995165221897e01e8274de9c0e2 Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.146704 4869 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager/controller-manager-6f4b748596-jmjql"] Jan 06 14:05:59 crc kubenswrapper[4869]: E0106 14:05:59.146926 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ef6e0b3-091e-417d-b24f-5b143680f3a9" containerName="controller-manager" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.146937 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ef6e0b3-091e-417d-b24f-5b143680f3a9" containerName="controller-manager" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.147034 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ef6e0b3-091e-417d-b24f-5b143680f3a9" containerName="controller-manager" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.148347 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.164659 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f4b748596-jmjql"] Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.256414 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc706e13-7ef6-467f-844d-83c4c50f3c34-config\") pod \"controller-manager-6f4b748596-jmjql\" (UID: \"bc706e13-7ef6-467f-844d-83c4c50f3c34\") " pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.256496 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc706e13-7ef6-467f-844d-83c4c50f3c34-proxy-ca-bundles\") pod \"controller-manager-6f4b748596-jmjql\" (UID: \"bc706e13-7ef6-467f-844d-83c4c50f3c34\") " pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.256538 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc706e13-7ef6-467f-844d-83c4c50f3c34-serving-cert\") pod \"controller-manager-6f4b748596-jmjql\" (UID: \"bc706e13-7ef6-467f-844d-83c4c50f3c34\") " pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.256571 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dzrl\" (UniqueName: \"kubernetes.io/projected/bc706e13-7ef6-467f-844d-83c4c50f3c34-kube-api-access-2dzrl\") pod \"controller-manager-6f4b748596-jmjql\" (UID: \"bc706e13-7ef6-467f-844d-83c4c50f3c34\") " pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.256600 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc706e13-7ef6-467f-844d-83c4c50f3c34-client-ca\") pod \"controller-manager-6f4b748596-jmjql\" (UID: \"bc706e13-7ef6-467f-844d-83c4c50f3c34\") " pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.358120 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc706e13-7ef6-467f-844d-83c4c50f3c34-proxy-ca-bundles\") pod 
\"controller-manager-6f4b748596-jmjql\" (UID: \"bc706e13-7ef6-467f-844d-83c4c50f3c34\") " pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.358178 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc706e13-7ef6-467f-844d-83c4c50f3c34-serving-cert\") pod \"controller-manager-6f4b748596-jmjql\" (UID: \"bc706e13-7ef6-467f-844d-83c4c50f3c34\") " pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.358203 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dzrl\" (UniqueName: \"kubernetes.io/projected/bc706e13-7ef6-467f-844d-83c4c50f3c34-kube-api-access-2dzrl\") pod \"controller-manager-6f4b748596-jmjql\" (UID: \"bc706e13-7ef6-467f-844d-83c4c50f3c34\") " pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.358225 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc706e13-7ef6-467f-844d-83c4c50f3c34-client-ca\") pod \"controller-manager-6f4b748596-jmjql\" (UID: \"bc706e13-7ef6-467f-844d-83c4c50f3c34\") " pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.358267 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc706e13-7ef6-467f-844d-83c4c50f3c34-config\") pod \"controller-manager-6f4b748596-jmjql\" (UID: \"bc706e13-7ef6-467f-844d-83c4c50f3c34\") " pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.359614 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc706e13-7ef6-467f-844d-83c4c50f3c34-client-ca\") pod \"controller-manager-6f4b748596-jmjql\" (UID: \"bc706e13-7ef6-467f-844d-83c4c50f3c34\") " pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.359769 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc706e13-7ef6-467f-844d-83c4c50f3c34-config\") pod \"controller-manager-6f4b748596-jmjql\" (UID: \"bc706e13-7ef6-467f-844d-83c4c50f3c34\") " pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.360429 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc706e13-7ef6-467f-844d-83c4c50f3c34-proxy-ca-bundles\") pod \"controller-manager-6f4b748596-jmjql\" (UID: \"bc706e13-7ef6-467f-844d-83c4c50f3c34\") " pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.363447 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc706e13-7ef6-467f-844d-83c4c50f3c34-serving-cert\") pod \"controller-manager-6f4b748596-jmjql\" (UID: \"bc706e13-7ef6-467f-844d-83c4c50f3c34\") " pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.387600 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2dzrl\" (UniqueName: \"kubernetes.io/projected/bc706e13-7ef6-467f-844d-83c4c50f3c34-kube-api-access-2dzrl\") pod \"controller-manager-6f4b748596-jmjql\" (UID: \"bc706e13-7ef6-467f-844d-83c4c50f3c34\") " pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.523829 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.627070 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jph6b" event={"ID":"1f2e5a2b-84ef-4926-9008-dec653a3c947","Type":"ContainerStarted","Data":"7a1b5bb70a748dfb598ac23a827c811a107f1f6341479afd144434a2baebaaa4"} Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.629371 4869 generic.go:334] "Generic (PLEG): container finished" podID="d025cef5-8e65-4270-afc4-838c1a166ad6" containerID="a0dea8f35e67227d1e8736ad3e1f617e0f3c8c52acf7c5812449e8dfbb0a8889" exitCode=0 Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.629455 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99k25" event={"ID":"d025cef5-8e65-4270-afc4-838c1a166ad6","Type":"ContainerDied","Data":"a0dea8f35e67227d1e8736ad3e1f617e0f3c8c52acf7c5812449e8dfbb0a8889"} Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.629516 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99k25" event={"ID":"d025cef5-8e65-4270-afc4-838c1a166ad6","Type":"ContainerStarted","Data":"b3ee7f72df55000763af4f9f3ccedcbd11d41870ce6e271e3525048bdbc7564a"} Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.631511 4869 generic.go:334] "Generic (PLEG): container finished" podID="4e4dd706-de57-4440-8881-d5f18ea2506e" containerID="e2cee9b99cef48f61bf8537a3b9b729cc06cccf2a24fdabb6eb20fb7b4244c35" exitCode=0 Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.631580 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb6kw" event={"ID":"4e4dd706-de57-4440-8881-d5f18ea2506e","Type":"ContainerDied","Data":"e2cee9b99cef48f61bf8537a3b9b729cc06cccf2a24fdabb6eb20fb7b4244c35"} Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.631615 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb6kw" event={"ID":"4e4dd706-de57-4440-8881-d5f18ea2506e","Type":"ContainerStarted","Data":"be8f308dfdd6d476c6248ed06ac07a5b29e47995165221897e01e8274de9c0e2"} Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.634356 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.634776 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-clh2w" event={"ID":"9ef6e0b3-091e-417d-b24f-5b143680f3a9","Type":"ContainerDied","Data":"8803ce780de48a16908232c2e8e458cd7f905659e4a24b7f54745ebde9c72ce2"} Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.634853 4869 scope.go:117] "RemoveContainer" containerID="e96976d0564f7e12746db21ce59d538983491b8bc982d71a378024a71915caf4" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.665435 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jph6b" podStartSLOduration=2.151390904 podStartE2EDuration="4.665408121s" podCreationTimestamp="2026-01-06 14:05:55 +0000 UTC" firstStartedPulling="2026-01-06 14:05:56.569767021 +0000 UTC m=+375.109454705" lastFinishedPulling="2026-01-06 14:05:59.083784258 +0000 UTC m=+377.623471922" observedRunningTime="2026-01-06 14:05:59.654489367 +0000 UTC m=+378.194177021" watchObservedRunningTime="2026-01-06 14:05:59.665408121 +0000 UTC m=+378.205095785" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.689717 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-clh2w"] Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.696512 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-clh2w"] Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.713720 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ef6e0b3-091e-417d-b24f-5b143680f3a9" path="/var/lib/kubelet/pods/9ef6e0b3-091e-417d-b24f-5b143680f3a9/volumes" Jan 06 14:05:59 crc kubenswrapper[4869]: I0106 14:05:59.950843 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f4b748596-jmjql"] Jan 06 14:05:59 crc kubenswrapper[4869]: W0106 14:05:59.957841 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc706e13_7ef6_467f_844d_83c4c50f3c34.slice/crio-1f7c10d5277541d87643f7c952a454c2a5771112dcdff7fff9f33de96fdbdb10 WatchSource:0}: Error finding container 1f7c10d5277541d87643f7c952a454c2a5771112dcdff7fff9f33de96fdbdb10: Status 404 returned error can't find the container with id 1f7c10d5277541d87643f7c952a454c2a5771112dcdff7fff9f33de96fdbdb10 Jan 06 14:06:00 crc kubenswrapper[4869]: I0106 14:06:00.641301 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" event={"ID":"bc706e13-7ef6-467f-844d-83c4c50f3c34","Type":"ContainerStarted","Data":"ee7bc61fcbbb6803da282e47097dfc8dff91b63b0258e1b53261e335f87f4b54"} Jan 06 14:06:00 crc kubenswrapper[4869]: I0106 14:06:00.642704 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" Jan 06 14:06:00 crc kubenswrapper[4869]: I0106 14:06:00.642749 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" event={"ID":"bc706e13-7ef6-467f-844d-83c4c50f3c34","Type":"ContainerStarted","Data":"1f7c10d5277541d87643f7c952a454c2a5771112dcdff7fff9f33de96fdbdb10"} Jan 06 14:06:00 crc kubenswrapper[4869]: I0106 14:06:00.644621 4869 
generic.go:334] "Generic (PLEG): container finished" podID="d025cef5-8e65-4270-afc4-838c1a166ad6" containerID="78e1f4bda3fb60ec8c3fb7a251f7692ce4f59e729e4a4218a271cad4c2765ea9" exitCode=0 Jan 06 14:06:00 crc kubenswrapper[4869]: I0106 14:06:00.646262 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99k25" event={"ID":"d025cef5-8e65-4270-afc4-838c1a166ad6","Type":"ContainerDied","Data":"78e1f4bda3fb60ec8c3fb7a251f7692ce4f59e729e4a4218a271cad4c2765ea9"} Jan 06 14:06:00 crc kubenswrapper[4869]: I0106 14:06:00.652001 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" Jan 06 14:06:00 crc kubenswrapper[4869]: I0106 14:06:00.664172 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6f4b748596-jmjql" podStartSLOduration=2.6641537790000003 podStartE2EDuration="2.664153779s" podCreationTimestamp="2026-01-06 14:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:06:00.660147542 +0000 UTC m=+379.199835216" watchObservedRunningTime="2026-01-06 14:06:00.664153779 +0000 UTC m=+379.203841443" Jan 06 14:06:01 crc kubenswrapper[4869]: I0106 14:06:01.651595 4869 generic.go:334] "Generic (PLEG): container finished" podID="4e4dd706-de57-4440-8881-d5f18ea2506e" containerID="8595de40f58e1e88612f2496675be51a3321804585e73a9a395c7e4f1987c2e9" exitCode=0 Jan 06 14:06:01 crc kubenswrapper[4869]: I0106 14:06:01.651649 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb6kw" event={"ID":"4e4dd706-de57-4440-8881-d5f18ea2506e","Type":"ContainerDied","Data":"8595de40f58e1e88612f2496675be51a3321804585e73a9a395c7e4f1987c2e9"} Jan 06 14:06:01 crc kubenswrapper[4869]: I0106 14:06:01.655068 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99k25" event={"ID":"d025cef5-8e65-4270-afc4-838c1a166ad6","Type":"ContainerStarted","Data":"2e4df655c3cb12985daf2876946f8c1ecaf43254d6d635ab755257ea94cdddc5"} Jan 06 14:06:01 crc kubenswrapper[4869]: I0106 14:06:01.697585 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-99k25" podStartSLOduration=3.1528017249999998 podStartE2EDuration="4.697565916s" podCreationTimestamp="2026-01-06 14:05:57 +0000 UTC" firstStartedPulling="2026-01-06 14:05:59.633338855 +0000 UTC m=+378.173026519" lastFinishedPulling="2026-01-06 14:06:01.178103046 +0000 UTC m=+379.717790710" observedRunningTime="2026-01-06 14:06:01.697485734 +0000 UTC m=+380.237173418" watchObservedRunningTime="2026-01-06 14:06:01.697565916 +0000 UTC m=+380.237253580" Jan 06 14:06:02 crc kubenswrapper[4869]: I0106 14:06:02.666991 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb6kw" event={"ID":"4e4dd706-de57-4440-8881-d5f18ea2506e","Type":"ContainerStarted","Data":"a16cf3f2786101f3e9ffb73f389d56f8864bb4aa10f479c742aa60e451094f6b"} Jan 06 14:06:03 crc kubenswrapper[4869]: I0106 14:06:03.622488 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:06:03 crc 
kubenswrapper[4869]: I0106 14:06:03.622565 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:06:05 crc kubenswrapper[4869]: I0106 14:06:05.638043 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8vjm9" Jan 06 14:06:05 crc kubenswrapper[4869]: I0106 14:06:05.638408 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8vjm9" Jan 06 14:06:05 crc kubenswrapper[4869]: I0106 14:06:05.680477 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8vjm9" Jan 06 14:06:05 crc kubenswrapper[4869]: I0106 14:06:05.710234 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kb6kw" podStartSLOduration=5.260326032 podStartE2EDuration="7.710215938s" podCreationTimestamp="2026-01-06 14:05:58 +0000 UTC" firstStartedPulling="2026-01-06 14:05:59.632740081 +0000 UTC m=+378.172427745" lastFinishedPulling="2026-01-06 14:06:02.082629987 +0000 UTC m=+380.622317651" observedRunningTime="2026-01-06 14:06:02.691366275 +0000 UTC m=+381.231053939" watchObservedRunningTime="2026-01-06 14:06:05.710215938 +0000 UTC m=+384.249903602" Jan 06 14:06:05 crc kubenswrapper[4869]: I0106 14:06:05.740912 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8vjm9" Jan 06 14:06:05 crc kubenswrapper[4869]: I0106 14:06:05.839092 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jph6b" Jan 06 14:06:05 crc kubenswrapper[4869]: I0106 14:06:05.839189 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jph6b" Jan 06 14:06:05 crc kubenswrapper[4869]: I0106 14:06:05.891050 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jph6b" Jan 06 14:06:06 crc kubenswrapper[4869]: I0106 14:06:06.732583 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jph6b" Jan 06 14:06:08 crc kubenswrapper[4869]: I0106 14:06:08.229862 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-99k25" Jan 06 14:06:08 crc kubenswrapper[4869]: I0106 14:06:08.230677 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-99k25" Jan 06 14:06:08 crc kubenswrapper[4869]: I0106 14:06:08.292958 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-99k25" Jan 06 14:06:08 crc kubenswrapper[4869]: I0106 14:06:08.511761 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kb6kw" Jan 06 14:06:08 crc kubenswrapper[4869]: I0106 14:06:08.511834 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kb6kw" Jan 06 14:06:08 crc kubenswrapper[4869]: I0106 14:06:08.588545 4869 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kb6kw" Jan 06 14:06:08 crc kubenswrapper[4869]: I0106 14:06:08.754930 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kb6kw" Jan 06 14:06:08 crc kubenswrapper[4869]: I0106 14:06:08.756110 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-99k25" Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.093331 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb"] Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.094084 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" podUID="4cc82658-f17b-4d38-b47e-078421f97005" containerName="route-controller-manager" containerID="cri-o://f0b43d8a486887eb651400498d52437bfdbade41f1ce1ed269247d7d324114d3" gracePeriod=30 Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.549083 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.657110 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cc82658-f17b-4d38-b47e-078421f97005-client-ca\") pod \"4cc82658-f17b-4d38-b47e-078421f97005\" (UID: \"4cc82658-f17b-4d38-b47e-078421f97005\") " Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.657211 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkz48\" (UniqueName: \"kubernetes.io/projected/4cc82658-f17b-4d38-b47e-078421f97005-kube-api-access-lkz48\") pod \"4cc82658-f17b-4d38-b47e-078421f97005\" (UID: \"4cc82658-f17b-4d38-b47e-078421f97005\") " Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.657242 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cc82658-f17b-4d38-b47e-078421f97005-config\") pod \"4cc82658-f17b-4d38-b47e-078421f97005\" (UID: \"4cc82658-f17b-4d38-b47e-078421f97005\") " Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.657325 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cc82658-f17b-4d38-b47e-078421f97005-serving-cert\") pod \"4cc82658-f17b-4d38-b47e-078421f97005\" (UID: \"4cc82658-f17b-4d38-b47e-078421f97005\") " Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.657925 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cc82658-f17b-4d38-b47e-078421f97005-client-ca" (OuterVolumeSpecName: "client-ca") pod "4cc82658-f17b-4d38-b47e-078421f97005" (UID: "4cc82658-f17b-4d38-b47e-078421f97005"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.658276 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cc82658-f17b-4d38-b47e-078421f97005-config" (OuterVolumeSpecName: "config") pod "4cc82658-f17b-4d38-b47e-078421f97005" (UID: "4cc82658-f17b-4d38-b47e-078421f97005"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.662563 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cc82658-f17b-4d38-b47e-078421f97005-kube-api-access-lkz48" (OuterVolumeSpecName: "kube-api-access-lkz48") pod "4cc82658-f17b-4d38-b47e-078421f97005" (UID: "4cc82658-f17b-4d38-b47e-078421f97005"). InnerVolumeSpecName "kube-api-access-lkz48". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.662985 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cc82658-f17b-4d38-b47e-078421f97005-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4cc82658-f17b-4d38-b47e-078421f97005" (UID: "4cc82658-f17b-4d38-b47e-078421f97005"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.757858 4869 generic.go:334] "Generic (PLEG): container finished" podID="4cc82658-f17b-4d38-b47e-078421f97005" containerID="f0b43d8a486887eb651400498d52437bfdbade41f1ce1ed269247d7d324114d3" exitCode=0 Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.757902 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" event={"ID":"4cc82658-f17b-4d38-b47e-078421f97005","Type":"ContainerDied","Data":"f0b43d8a486887eb651400498d52437bfdbade41f1ce1ed269247d7d324114d3"} Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.757926 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.757953 4869 scope.go:117] "RemoveContainer" containerID="f0b43d8a486887eb651400498d52437bfdbade41f1ce1ed269247d7d324114d3" Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.757937 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb" event={"ID":"4cc82658-f17b-4d38-b47e-078421f97005","Type":"ContainerDied","Data":"b0df986eb8455b36415be222f6b87db0f191c19da3b1beaf088cc1a3b3f4b0ce"} Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.759606 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cc82658-f17b-4d38-b47e-078421f97005-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.759638 4869 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cc82658-f17b-4d38-b47e-078421f97005-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.759691 4869 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4cc82658-f17b-4d38-b47e-078421f97005-client-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.759718 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkz48\" (UniqueName: \"kubernetes.io/projected/4cc82658-f17b-4d38-b47e-078421f97005-kube-api-access-lkz48\") on node \"crc\" DevicePath \"\"" Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.784960 4869 scope.go:117] "RemoveContainer" containerID="f0b43d8a486887eb651400498d52437bfdbade41f1ce1ed269247d7d324114d3" Jan 06 
14:06:18 crc kubenswrapper[4869]: E0106 14:06:18.785819 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0b43d8a486887eb651400498d52437bfdbade41f1ce1ed269247d7d324114d3\": container with ID starting with f0b43d8a486887eb651400498d52437bfdbade41f1ce1ed269247d7d324114d3 not found: ID does not exist" containerID="f0b43d8a486887eb651400498d52437bfdbade41f1ce1ed269247d7d324114d3" Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.785875 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0b43d8a486887eb651400498d52437bfdbade41f1ce1ed269247d7d324114d3"} err="failed to get container status \"f0b43d8a486887eb651400498d52437bfdbade41f1ce1ed269247d7d324114d3\": rpc error: code = NotFound desc = could not find container \"f0b43d8a486887eb651400498d52437bfdbade41f1ce1ed269247d7d324114d3\": container with ID starting with f0b43d8a486887eb651400498d52437bfdbade41f1ce1ed269247d7d324114d3 not found: ID does not exist" Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.789121 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb"] Jan 06 14:06:18 crc kubenswrapper[4869]: I0106 14:06:18.793140 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-mr4cb"] Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.159512 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql"] Jan 06 14:06:19 crc kubenswrapper[4869]: E0106 14:06:19.160591 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cc82658-f17b-4d38-b47e-078421f97005" containerName="route-controller-manager" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.160973 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cc82658-f17b-4d38-b47e-078421f97005" containerName="route-controller-manager" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.161150 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cc82658-f17b-4d38-b47e-078421f97005" containerName="route-controller-manager" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.161637 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.165186 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.165624 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.165750 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.166128 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.166406 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.168910 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.183364 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql"] Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.264500 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aeb5c17-05a7-4adb-bf85-e192025796da-config\") pod \"route-controller-manager-7ff8f9bc68-v86ql\" (UID: \"0aeb5c17-05a7-4adb-bf85-e192025796da\") " pod="openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.264604 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aeb5c17-05a7-4adb-bf85-e192025796da-serving-cert\") pod \"route-controller-manager-7ff8f9bc68-v86ql\" (UID: \"0aeb5c17-05a7-4adb-bf85-e192025796da\") " pod="openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.264650 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k9pn\" (UniqueName: \"kubernetes.io/projected/0aeb5c17-05a7-4adb-bf85-e192025796da-kube-api-access-2k9pn\") pod \"route-controller-manager-7ff8f9bc68-v86ql\" (UID: \"0aeb5c17-05a7-4adb-bf85-e192025796da\") " pod="openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.264753 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0aeb5c17-05a7-4adb-bf85-e192025796da-client-ca\") pod \"route-controller-manager-7ff8f9bc68-v86ql\" (UID: \"0aeb5c17-05a7-4adb-bf85-e192025796da\") " pod="openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.366309 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aeb5c17-05a7-4adb-bf85-e192025796da-config\") pod 
\"route-controller-manager-7ff8f9bc68-v86ql\" (UID: \"0aeb5c17-05a7-4adb-bf85-e192025796da\") " pod="openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.366374 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aeb5c17-05a7-4adb-bf85-e192025796da-serving-cert\") pod \"route-controller-manager-7ff8f9bc68-v86ql\" (UID: \"0aeb5c17-05a7-4adb-bf85-e192025796da\") " pod="openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.366402 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k9pn\" (UniqueName: \"kubernetes.io/projected/0aeb5c17-05a7-4adb-bf85-e192025796da-kube-api-access-2k9pn\") pod \"route-controller-manager-7ff8f9bc68-v86ql\" (UID: \"0aeb5c17-05a7-4adb-bf85-e192025796da\") " pod="openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.366443 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0aeb5c17-05a7-4adb-bf85-e192025796da-client-ca\") pod \"route-controller-manager-7ff8f9bc68-v86ql\" (UID: \"0aeb5c17-05a7-4adb-bf85-e192025796da\") " pod="openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.367465 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0aeb5c17-05a7-4adb-bf85-e192025796da-client-ca\") pod \"route-controller-manager-7ff8f9bc68-v86ql\" (UID: \"0aeb5c17-05a7-4adb-bf85-e192025796da\") " pod="openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.367636 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aeb5c17-05a7-4adb-bf85-e192025796da-config\") pod \"route-controller-manager-7ff8f9bc68-v86ql\" (UID: \"0aeb5c17-05a7-4adb-bf85-e192025796da\") " pod="openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.372155 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aeb5c17-05a7-4adb-bf85-e192025796da-serving-cert\") pod \"route-controller-manager-7ff8f9bc68-v86ql\" (UID: \"0aeb5c17-05a7-4adb-bf85-e192025796da\") " pod="openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.389243 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k9pn\" (UniqueName: \"kubernetes.io/projected/0aeb5c17-05a7-4adb-bf85-e192025796da-kube-api-access-2k9pn\") pod \"route-controller-manager-7ff8f9bc68-v86ql\" (UID: \"0aeb5c17-05a7-4adb-bf85-e192025796da\") " pod="openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.477436 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.731613 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cc82658-f17b-4d38-b47e-078421f97005" path="/var/lib/kubelet/pods/4cc82658-f17b-4d38-b47e-078421f97005/volumes" Jan 06 14:06:19 crc kubenswrapper[4869]: I0106 14:06:19.925480 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql"] Jan 06 14:06:20 crc kubenswrapper[4869]: I0106 14:06:20.773653 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql" event={"ID":"0aeb5c17-05a7-4adb-bf85-e192025796da","Type":"ContainerStarted","Data":"52ac6b4807adee4f1e5bf1dcc0013b1ae5cbcfa2faac33c06f2e1afba9f64192"} Jan 06 14:06:20 crc kubenswrapper[4869]: I0106 14:06:20.774052 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql" event={"ID":"0aeb5c17-05a7-4adb-bf85-e192025796da","Type":"ContainerStarted","Data":"16c52c003f89714155927219cf0230d7341b1b7f93ac7b91903719b2b1fc05cf"} Jan 06 14:06:20 crc kubenswrapper[4869]: I0106 14:06:20.774218 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql" Jan 06 14:06:20 crc kubenswrapper[4869]: I0106 14:06:20.779835 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql" Jan 06 14:06:20 crc kubenswrapper[4869]: I0106 14:06:20.803369 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7ff8f9bc68-v86ql" podStartSLOduration=2.803348065 podStartE2EDuration="2.803348065s" podCreationTimestamp="2026-01-06 14:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:06:20.796806814 +0000 UTC m=+399.336494488" watchObservedRunningTime="2026-01-06 14:06:20.803348065 +0000 UTC m=+399.343035729" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.238298 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mfvt6"] Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.239557 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.252823 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mfvt6"] Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.365260 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1bcabc96-0c48-46cc-817c-f1dedc15e4c7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.365344 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1bcabc96-0c48-46cc-817c-f1dedc15e4c7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.365366 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1bcabc96-0c48-46cc-817c-f1dedc15e4c7-registry-certificates\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.365385 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1bcabc96-0c48-46cc-817c-f1dedc15e4c7-registry-tls\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.365471 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhf4m\" (UniqueName: \"kubernetes.io/projected/1bcabc96-0c48-46cc-817c-f1dedc15e4c7-kube-api-access-bhf4m\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.365508 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.366184 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1bcabc96-0c48-46cc-817c-f1dedc15e4c7-trusted-ca\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.366223 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/1bcabc96-0c48-46cc-817c-f1dedc15e4c7-bound-sa-token\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.390507 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.467494 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhf4m\" (UniqueName: \"kubernetes.io/projected/1bcabc96-0c48-46cc-817c-f1dedc15e4c7-kube-api-access-bhf4m\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.467581 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1bcabc96-0c48-46cc-817c-f1dedc15e4c7-trusted-ca\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.467611 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1bcabc96-0c48-46cc-817c-f1dedc15e4c7-bound-sa-token\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.467646 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1bcabc96-0c48-46cc-817c-f1dedc15e4c7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.468030 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1bcabc96-0c48-46cc-817c-f1dedc15e4c7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.468065 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1bcabc96-0c48-46cc-817c-f1dedc15e4c7-registry-certificates\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.468095 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1bcabc96-0c48-46cc-817c-f1dedc15e4c7-registry-tls\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.468892 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1bcabc96-0c48-46cc-817c-f1dedc15e4c7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.469653 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1bcabc96-0c48-46cc-817c-f1dedc15e4c7-registry-certificates\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.469821 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1bcabc96-0c48-46cc-817c-f1dedc15e4c7-trusted-ca\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.474522 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1bcabc96-0c48-46cc-817c-f1dedc15e4c7-registry-tls\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.476231 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1bcabc96-0c48-46cc-817c-f1dedc15e4c7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.492569 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1bcabc96-0c48-46cc-817c-f1dedc15e4c7-bound-sa-token\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.492928 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhf4m\" (UniqueName: \"kubernetes.io/projected/1bcabc96-0c48-46cc-817c-f1dedc15e4c7-kube-api-access-bhf4m\") pod \"image-registry-66df7c8f76-mfvt6\" (UID: \"1bcabc96-0c48-46cc-817c-f1dedc15e4c7\") " pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:25 crc kubenswrapper[4869]: I0106 14:06:25.554843 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:26 crc kubenswrapper[4869]: I0106 14:06:26.001477 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mfvt6"] Jan 06 14:06:26 crc kubenswrapper[4869]: W0106 14:06:26.013167 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bcabc96_0c48_46cc_817c_f1dedc15e4c7.slice/crio-fff9a1331cbbb3f5cc4f56bec44399d933589af52160a486c737a3bb3965b018 WatchSource:0}: Error finding container fff9a1331cbbb3f5cc4f56bec44399d933589af52160a486c737a3bb3965b018: Status 404 returned error can't find the container with id fff9a1331cbbb3f5cc4f56bec44399d933589af52160a486c737a3bb3965b018 Jan 06 14:06:26 crc kubenswrapper[4869]: I0106 14:06:26.807852 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" event={"ID":"1bcabc96-0c48-46cc-817c-f1dedc15e4c7","Type":"ContainerStarted","Data":"6753f804ad13a2f16c4ae5cb7e504c1e1d7c8386b8e7919d9e5ea22581b3b68c"} Jan 06 14:06:26 crc kubenswrapper[4869]: I0106 14:06:26.808143 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" event={"ID":"1bcabc96-0c48-46cc-817c-f1dedc15e4c7","Type":"ContainerStarted","Data":"fff9a1331cbbb3f5cc4f56bec44399d933589af52160a486c737a3bb3965b018"} Jan 06 14:06:26 crc kubenswrapper[4869]: I0106 14:06:26.808887 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:26 crc kubenswrapper[4869]: I0106 14:06:26.831831 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" podStartSLOduration=1.831809889 podStartE2EDuration="1.831809889s" podCreationTimestamp="2026-01-06 14:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:06:26.829308854 +0000 UTC m=+405.368996528" watchObservedRunningTime="2026-01-06 14:06:26.831809889 +0000 UTC m=+405.371497543" Jan 06 14:06:33 crc kubenswrapper[4869]: I0106 14:06:33.623134 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:06:33 crc kubenswrapper[4869]: I0106 14:06:33.624080 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:06:33 crc kubenswrapper[4869]: I0106 14:06:33.624165 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:06:33 crc kubenswrapper[4869]: I0106 14:06:33.625091 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d5321810772a97756861c2d66ff49b793c0dab0865c23023c08245455a5b7fce"} pod="openshift-machine-config-operator/machine-config-daemon-kt9df" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 06 14:06:33 crc kubenswrapper[4869]: I0106 14:06:33.625204 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" containerID="cri-o://d5321810772a97756861c2d66ff49b793c0dab0865c23023c08245455a5b7fce" gracePeriod=600 Jan 06 14:06:33 crc kubenswrapper[4869]: I0106 14:06:33.857319 4869 generic.go:334] "Generic (PLEG): container finished" podID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerID="d5321810772a97756861c2d66ff49b793c0dab0865c23023c08245455a5b7fce" exitCode=0 Jan 06 14:06:33 crc kubenswrapper[4869]: I0106 14:06:33.857379 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerDied","Data":"d5321810772a97756861c2d66ff49b793c0dab0865c23023c08245455a5b7fce"} Jan 06 14:06:33 crc kubenswrapper[4869]: I0106 14:06:33.857499 4869 scope.go:117] "RemoveContainer" containerID="d93627c2e104a6c4205c0db6560f774807ec34c325277e9645743f234547b1b0" Jan 06 14:06:34 crc kubenswrapper[4869]: I0106 14:06:34.865162 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerStarted","Data":"30832d18b90f5a6f313dd7444f9ee97e789e86d3f14416ed00953fef6c868254"} Jan 06 14:06:45 crc kubenswrapper[4869]: I0106 14:06:45.560358 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-mfvt6" Jan 06 14:06:45 crc kubenswrapper[4869]: I0106 14:06:45.619467 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5jk5b"] Jan 06 14:07:10 crc kubenswrapper[4869]: I0106 14:07:10.662606 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" podUID="15c48694-481d-4ac5-80cc-e153ca5fb1d1" containerName="registry" containerID="cri-o://7bfa91b7521c7a7b7874422e4113029d9f8af933ddaa55db42a4950ab4bce8e6" gracePeriod=30 Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.096104 4869 generic.go:334] "Generic (PLEG): container finished" podID="15c48694-481d-4ac5-80cc-e153ca5fb1d1" containerID="7bfa91b7521c7a7b7874422e4113029d9f8af933ddaa55db42a4950ab4bce8e6" exitCode=0 Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.096218 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" event={"ID":"15c48694-481d-4ac5-80cc-e153ca5fb1d1","Type":"ContainerDied","Data":"7bfa91b7521c7a7b7874422e4113029d9f8af933ddaa55db42a4950ab4bce8e6"} Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.566926 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.647975 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/15c48694-481d-4ac5-80cc-e153ca5fb1d1-installation-pull-secrets\") pod \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.648037 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/15c48694-481d-4ac5-80cc-e153ca5fb1d1-ca-trust-extracted\") pod \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.656628 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15c48694-481d-4ac5-80cc-e153ca5fb1d1-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "15c48694-481d-4ac5-80cc-e153ca5fb1d1" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.666720 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15c48694-481d-4ac5-80cc-e153ca5fb1d1-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "15c48694-481d-4ac5-80cc-e153ca5fb1d1" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.749060 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/15c48694-481d-4ac5-80cc-e153ca5fb1d1-bound-sa-token\") pod \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.749230 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.749260 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/15c48694-481d-4ac5-80cc-e153ca5fb1d1-registry-tls\") pod \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.749290 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l8zb\" (UniqueName: \"kubernetes.io/projected/15c48694-481d-4ac5-80cc-e153ca5fb1d1-kube-api-access-2l8zb\") pod \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.749323 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/15c48694-481d-4ac5-80cc-e153ca5fb1d1-registry-certificates\") pod \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\" (UID: 
\"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.749351 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15c48694-481d-4ac5-80cc-e153ca5fb1d1-trusted-ca\") pod \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\" (UID: \"15c48694-481d-4ac5-80cc-e153ca5fb1d1\") " Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.749560 4869 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/15c48694-481d-4ac5-80cc-e153ca5fb1d1-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.749579 4869 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/15c48694-481d-4ac5-80cc-e153ca5fb1d1-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.750827 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15c48694-481d-4ac5-80cc-e153ca5fb1d1-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "15c48694-481d-4ac5-80cc-e153ca5fb1d1" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.750886 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15c48694-481d-4ac5-80cc-e153ca5fb1d1-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "15c48694-481d-4ac5-80cc-e153ca5fb1d1" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.754742 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15c48694-481d-4ac5-80cc-e153ca5fb1d1-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "15c48694-481d-4ac5-80cc-e153ca5fb1d1" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.756979 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15c48694-481d-4ac5-80cc-e153ca5fb1d1-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "15c48694-481d-4ac5-80cc-e153ca5fb1d1" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.757333 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15c48694-481d-4ac5-80cc-e153ca5fb1d1-kube-api-access-2l8zb" (OuterVolumeSpecName: "kube-api-access-2l8zb") pod "15c48694-481d-4ac5-80cc-e153ca5fb1d1" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1"). InnerVolumeSpecName "kube-api-access-2l8zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.761430 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "15c48694-481d-4ac5-80cc-e153ca5fb1d1" (UID: "15c48694-481d-4ac5-80cc-e153ca5fb1d1"). 
InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.850429 4869 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/15c48694-481d-4ac5-80cc-e153ca5fb1d1-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.850496 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/15c48694-481d-4ac5-80cc-e153ca5fb1d1-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.850515 4869 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/15c48694-481d-4ac5-80cc-e153ca5fb1d1-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.850532 4869 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/15c48694-481d-4ac5-80cc-e153ca5fb1d1-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 06 14:07:11 crc kubenswrapper[4869]: I0106 14:07:11.850545 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l8zb\" (UniqueName: \"kubernetes.io/projected/15c48694-481d-4ac5-80cc-e153ca5fb1d1-kube-api-access-2l8zb\") on node \"crc\" DevicePath \"\"" Jan 06 14:07:12 crc kubenswrapper[4869]: I0106 14:07:12.104177 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" event={"ID":"15c48694-481d-4ac5-80cc-e153ca5fb1d1","Type":"ContainerDied","Data":"235531a2fba0d2132229e03d0b19943743e3186477f936fc405ffdbd0441ca44"} Jan 06 14:07:12 crc kubenswrapper[4869]: I0106 14:07:12.104236 4869 scope.go:117] "RemoveContainer" containerID="7bfa91b7521c7a7b7874422e4113029d9f8af933ddaa55db42a4950ab4bce8e6" Jan 06 14:07:12 crc kubenswrapper[4869]: I0106 14:07:12.104366 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5jk5b" Jan 06 14:07:12 crc kubenswrapper[4869]: I0106 14:07:12.146171 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5jk5b"] Jan 06 14:07:12 crc kubenswrapper[4869]: I0106 14:07:12.153236 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5jk5b"] Jan 06 14:07:13 crc kubenswrapper[4869]: I0106 14:07:13.720276 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15c48694-481d-4ac5-80cc-e153ca5fb1d1" path="/var/lib/kubelet/pods/15c48694-481d-4ac5-80cc-e153ca5fb1d1/volumes" Jan 06 14:08:33 crc kubenswrapper[4869]: I0106 14:08:33.622313 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:08:33 crc kubenswrapper[4869]: I0106 14:08:33.623126 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:09:03 crc kubenswrapper[4869]: I0106 14:09:03.622960 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:09:03 crc kubenswrapper[4869]: I0106 14:09:03.624255 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:09:33 crc kubenswrapper[4869]: I0106 14:09:33.622854 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:09:33 crc kubenswrapper[4869]: I0106 14:09:33.623465 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:09:33 crc kubenswrapper[4869]: I0106 14:09:33.623527 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:09:33 crc kubenswrapper[4869]: I0106 14:09:33.624442 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30832d18b90f5a6f313dd7444f9ee97e789e86d3f14416ed00953fef6c868254"} pod="openshift-machine-config-operator/machine-config-daemon-kt9df" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Jan 06 14:09:33 crc kubenswrapper[4869]: I0106 14:09:33.624542 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" containerID="cri-o://30832d18b90f5a6f313dd7444f9ee97e789e86d3f14416ed00953fef6c868254" gracePeriod=600 Jan 06 14:09:34 crc kubenswrapper[4869]: I0106 14:09:34.024407 4869 generic.go:334] "Generic (PLEG): container finished" podID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerID="30832d18b90f5a6f313dd7444f9ee97e789e86d3f14416ed00953fef6c868254" exitCode=0 Jan 06 14:09:34 crc kubenswrapper[4869]: I0106 14:09:34.024498 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerDied","Data":"30832d18b90f5a6f313dd7444f9ee97e789e86d3f14416ed00953fef6c868254"} Jan 06 14:09:34 crc kubenswrapper[4869]: I0106 14:09:34.024861 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerStarted","Data":"27602a36611783728a2b020431c5bc3185474cb58d70bd206f2784227d107aee"} Jan 06 14:09:34 crc kubenswrapper[4869]: I0106 14:09:34.024889 4869 scope.go:117] "RemoveContainer" containerID="d5321810772a97756861c2d66ff49b793c0dab0865c23023c08245455a5b7fce" Jan 06 14:09:59 crc kubenswrapper[4869]: I0106 14:09:59.802817 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-622jv"] Jan 06 14:09:59 crc kubenswrapper[4869]: E0106 14:09:59.808096 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c48694-481d-4ac5-80cc-e153ca5fb1d1" containerName="registry" Jan 06 14:09:59 crc kubenswrapper[4869]: I0106 14:09:59.808143 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c48694-481d-4ac5-80cc-e153ca5fb1d1" containerName="registry" Jan 06 14:09:59 crc kubenswrapper[4869]: I0106 14:09:59.808372 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c48694-481d-4ac5-80cc-e153ca5fb1d1" containerName="registry" Jan 06 14:09:59 crc kubenswrapper[4869]: I0106 14:09:59.808956 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-622jv" Jan 06 14:09:59 crc kubenswrapper[4869]: I0106 14:09:59.811427 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-mpc28"] Jan 06 14:09:59 crc kubenswrapper[4869]: I0106 14:09:59.812190 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mpc28" Jan 06 14:09:59 crc kubenswrapper[4869]: I0106 14:09:59.815184 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-tgwhv" Jan 06 14:09:59 crc kubenswrapper[4869]: I0106 14:09:59.823755 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 06 14:09:59 crc kubenswrapper[4869]: I0106 14:09:59.823982 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 06 14:09:59 crc kubenswrapper[4869]: I0106 14:09:59.828243 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-ht2rp"] Jan 06 14:09:59 crc kubenswrapper[4869]: I0106 14:09:59.829071 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-ht2rp" Jan 06 14:09:59 crc kubenswrapper[4869]: I0106 14:09:59.829983 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-kqbnw" Jan 06 14:09:59 crc kubenswrapper[4869]: I0106 14:09:59.832307 4869 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-x5cbg" Jan 06 14:09:59 crc kubenswrapper[4869]: I0106 14:09:59.837854 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-622jv"] Jan 06 14:09:59 crc kubenswrapper[4869]: I0106 14:09:59.861038 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-ht2rp"] Jan 06 14:09:59 crc kubenswrapper[4869]: I0106 14:09:59.867868 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mpc28"] Jan 06 14:10:00 crc kubenswrapper[4869]: I0106 14:10:00.010186 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cv74\" (UniqueName: \"kubernetes.io/projected/1d08b8f3-af4f-4dff-876f-53fe177523f0-kube-api-access-8cv74\") pod \"cert-manager-858654f9db-mpc28\" (UID: \"1d08b8f3-af4f-4dff-876f-53fe177523f0\") " pod="cert-manager/cert-manager-858654f9db-mpc28" Jan 06 14:10:00 crc kubenswrapper[4869]: I0106 14:10:00.010258 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvgmr\" (UniqueName: \"kubernetes.io/projected/34990624-9069-46d0-b8b5-03a8b37ef9ae-kube-api-access-rvgmr\") pod \"cert-manager-cainjector-cf98fcc89-622jv\" (UID: \"34990624-9069-46d0-b8b5-03a8b37ef9ae\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-622jv" Jan 06 14:10:00 crc kubenswrapper[4869]: I0106 14:10:00.010353 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp8zr\" (UniqueName: \"kubernetes.io/projected/d899fe45-a78f-4d3b-af09-fc0eb97afd9a-kube-api-access-hp8zr\") pod \"cert-manager-webhook-687f57d79b-ht2rp\" (UID: \"d899fe45-a78f-4d3b-af09-fc0eb97afd9a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-ht2rp" Jan 06 14:10:00 crc kubenswrapper[4869]: I0106 14:10:00.111791 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cv74\" (UniqueName: \"kubernetes.io/projected/1d08b8f3-af4f-4dff-876f-53fe177523f0-kube-api-access-8cv74\") pod \"cert-manager-858654f9db-mpc28\" (UID: \"1d08b8f3-af4f-4dff-876f-53fe177523f0\") " 
pod="cert-manager/cert-manager-858654f9db-mpc28" Jan 06 14:10:00 crc kubenswrapper[4869]: I0106 14:10:00.111843 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvgmr\" (UniqueName: \"kubernetes.io/projected/34990624-9069-46d0-b8b5-03a8b37ef9ae-kube-api-access-rvgmr\") pod \"cert-manager-cainjector-cf98fcc89-622jv\" (UID: \"34990624-9069-46d0-b8b5-03a8b37ef9ae\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-622jv" Jan 06 14:10:00 crc kubenswrapper[4869]: I0106 14:10:00.111882 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp8zr\" (UniqueName: \"kubernetes.io/projected/d899fe45-a78f-4d3b-af09-fc0eb97afd9a-kube-api-access-hp8zr\") pod \"cert-manager-webhook-687f57d79b-ht2rp\" (UID: \"d899fe45-a78f-4d3b-af09-fc0eb97afd9a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-ht2rp" Jan 06 14:10:00 crc kubenswrapper[4869]: I0106 14:10:00.140124 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp8zr\" (UniqueName: \"kubernetes.io/projected/d899fe45-a78f-4d3b-af09-fc0eb97afd9a-kube-api-access-hp8zr\") pod \"cert-manager-webhook-687f57d79b-ht2rp\" (UID: \"d899fe45-a78f-4d3b-af09-fc0eb97afd9a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-ht2rp" Jan 06 14:10:00 crc kubenswrapper[4869]: I0106 14:10:00.141077 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvgmr\" (UniqueName: \"kubernetes.io/projected/34990624-9069-46d0-b8b5-03a8b37ef9ae-kube-api-access-rvgmr\") pod \"cert-manager-cainjector-cf98fcc89-622jv\" (UID: \"34990624-9069-46d0-b8b5-03a8b37ef9ae\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-622jv" Jan 06 14:10:00 crc kubenswrapper[4869]: I0106 14:10:00.141270 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cv74\" (UniqueName: \"kubernetes.io/projected/1d08b8f3-af4f-4dff-876f-53fe177523f0-kube-api-access-8cv74\") pod \"cert-manager-858654f9db-mpc28\" (UID: \"1d08b8f3-af4f-4dff-876f-53fe177523f0\") " pod="cert-manager/cert-manager-858654f9db-mpc28" Jan 06 14:10:00 crc kubenswrapper[4869]: I0106 14:10:00.144012 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mpc28" Jan 06 14:10:00 crc kubenswrapper[4869]: I0106 14:10:00.158342 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-ht2rp" Jan 06 14:10:00 crc kubenswrapper[4869]: I0106 14:10:00.406860 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mpc28"] Jan 06 14:10:00 crc kubenswrapper[4869]: I0106 14:10:00.416650 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 06 14:10:00 crc kubenswrapper[4869]: I0106 14:10:00.430867 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-622jv" Jan 06 14:10:00 crc kubenswrapper[4869]: I0106 14:10:00.572231 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-ht2rp"] Jan 06 14:10:00 crc kubenswrapper[4869]: I0106 14:10:00.619512 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-622jv"] Jan 06 14:10:00 crc kubenswrapper[4869]: W0106 14:10:00.627316 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34990624_9069_46d0_b8b5_03a8b37ef9ae.slice/crio-25e7056a98f70d70ad1a0361e050539f694cd7969a702a838488785bcd2ede24 WatchSource:0}: Error finding container 25e7056a98f70d70ad1a0361e050539f694cd7969a702a838488785bcd2ede24: Status 404 returned error can't find the container with id 25e7056a98f70d70ad1a0361e050539f694cd7969a702a838488785bcd2ede24 Jan 06 14:10:01 crc kubenswrapper[4869]: I0106 14:10:01.184731 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-ht2rp" event={"ID":"d899fe45-a78f-4d3b-af09-fc0eb97afd9a","Type":"ContainerStarted","Data":"7e2ddfa34a7c8f328f620550334f06616b778cf14a7b4ed84e56772da4a48fc0"} Jan 06 14:10:01 crc kubenswrapper[4869]: I0106 14:10:01.186422 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-622jv" event={"ID":"34990624-9069-46d0-b8b5-03a8b37ef9ae","Type":"ContainerStarted","Data":"25e7056a98f70d70ad1a0361e050539f694cd7969a702a838488785bcd2ede24"} Jan 06 14:10:01 crc kubenswrapper[4869]: I0106 14:10:01.187330 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mpc28" event={"ID":"1d08b8f3-af4f-4dff-876f-53fe177523f0","Type":"ContainerStarted","Data":"156454b153f18492d340c59c2f5c91a7b1bce5d7fdc55c3e6e029cb5a298bd4f"} Jan 06 14:10:05 crc kubenswrapper[4869]: I0106 14:10:05.209959 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mpc28" event={"ID":"1d08b8f3-af4f-4dff-876f-53fe177523f0","Type":"ContainerStarted","Data":"a671d85cdcfa670cfda514e40c87fdd41b41174a7674d80253a9164fa8391687"} Jan 06 14:10:05 crc kubenswrapper[4869]: I0106 14:10:05.212071 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-ht2rp" event={"ID":"d899fe45-a78f-4d3b-af09-fc0eb97afd9a","Type":"ContainerStarted","Data":"9f26d2a432516e8a75961a3caa564d3edce947a9f9e1164c46979c96ad535396"} Jan 06 14:10:05 crc kubenswrapper[4869]: I0106 14:10:05.212168 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-ht2rp" Jan 06 14:10:05 crc kubenswrapper[4869]: I0106 14:10:05.213902 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-622jv" event={"ID":"34990624-9069-46d0-b8b5-03a8b37ef9ae","Type":"ContainerStarted","Data":"87ada93d30b872ca4ada83dbcea3c91e0a03cf076d7240a24946c54eb03e5d2c"} Jan 06 14:10:05 crc kubenswrapper[4869]: I0106 14:10:05.240172 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-mpc28" podStartSLOduration=2.459372737 podStartE2EDuration="6.240154573s" podCreationTimestamp="2026-01-06 14:09:59 +0000 UTC" firstStartedPulling="2026-01-06 14:10:00.416425316 +0000 UTC m=+618.956112980" lastFinishedPulling="2026-01-06 14:10:04.197207152 
+0000 UTC m=+622.736894816" observedRunningTime="2026-01-06 14:10:05.237342165 +0000 UTC m=+623.777029829" watchObservedRunningTime="2026-01-06 14:10:05.240154573 +0000 UTC m=+623.779842227" Jan 06 14:10:05 crc kubenswrapper[4869]: I0106 14:10:05.261247 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-ht2rp" podStartSLOduration=2.6480025940000003 podStartE2EDuration="6.261230143s" podCreationTimestamp="2026-01-06 14:09:59 +0000 UTC" firstStartedPulling="2026-01-06 14:10:00.583191024 +0000 UTC m=+619.122878678" lastFinishedPulling="2026-01-06 14:10:04.196418563 +0000 UTC m=+622.736106227" observedRunningTime="2026-01-06 14:10:05.259200884 +0000 UTC m=+623.798888558" watchObservedRunningTime="2026-01-06 14:10:05.261230143 +0000 UTC m=+623.800917807" Jan 06 14:10:05 crc kubenswrapper[4869]: I0106 14:10:05.276118 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-622jv" podStartSLOduration=2.655608088 podStartE2EDuration="6.276100863s" podCreationTimestamp="2026-01-06 14:09:59 +0000 UTC" firstStartedPulling="2026-01-06 14:10:00.632629521 +0000 UTC m=+619.172317195" lastFinishedPulling="2026-01-06 14:10:04.253122306 +0000 UTC m=+622.792809970" observedRunningTime="2026-01-06 14:10:05.275426527 +0000 UTC m=+623.815114191" watchObservedRunningTime="2026-01-06 14:10:05.276100863 +0000 UTC m=+623.815788517" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.512415 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2f9tq"] Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.513574 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovn-controller" containerID="cri-o://4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a" gracePeriod=30 Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.513624 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="nbdb" containerID="cri-o://1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41" gracePeriod=30 Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.513800 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovn-acl-logging" containerID="cri-o://6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8" gracePeriod=30 Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.513766 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="northd" containerID="cri-o://2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f" gracePeriod=30 Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.513816 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="sbdb" containerID="cri-o://34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3" gracePeriod=30 Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.513838 4869 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="kube-rbac-proxy-node" containerID="cri-o://5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d" gracePeriod=30 Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.513836 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb" gracePeriod=30 Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.574605 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovnkube-controller" containerID="cri-o://f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb" gracePeriod=30 Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.802262 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2f9tq_487c527a-7d89-4175-8827-c8cdd6e0211f/ovnkube-controller/3.log" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.804746 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2f9tq_487c527a-7d89-4175-8827-c8cdd6e0211f/ovn-acl-logging/0.log" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.805233 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2f9tq_487c527a-7d89-4175-8827-c8cdd6e0211f/ovn-controller/0.log" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.805792 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870047 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-cni-bin\") pod \"487c527a-7d89-4175-8827-c8cdd6e0211f\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870110 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-run-ovn\") pod \"487c527a-7d89-4175-8827-c8cdd6e0211f\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870139 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-etc-openvswitch\") pod \"487c527a-7d89-4175-8827-c8cdd6e0211f\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870163 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-run-openvswitch\") pod \"487c527a-7d89-4175-8827-c8cdd6e0211f\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870180 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"487c527a-7d89-4175-8827-c8cdd6e0211f\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870209 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-cni-netd\") pod \"487c527a-7d89-4175-8827-c8cdd6e0211f\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870224 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-run-netns\") pod \"487c527a-7d89-4175-8827-c8cdd6e0211f\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870258 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-node-log\") pod \"487c527a-7d89-4175-8827-c8cdd6e0211f\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870256 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "487c527a-7d89-4175-8827-c8cdd6e0211f" (UID: "487c527a-7d89-4175-8827-c8cdd6e0211f"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870297 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/487c527a-7d89-4175-8827-c8cdd6e0211f-ovnkube-script-lib\") pod \"487c527a-7d89-4175-8827-c8cdd6e0211f\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870284 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "487c527a-7d89-4175-8827-c8cdd6e0211f" (UID: "487c527a-7d89-4175-8827-c8cdd6e0211f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870316 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tjgb6"] Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870337 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "487c527a-7d89-4175-8827-c8cdd6e0211f" (UID: "487c527a-7d89-4175-8827-c8cdd6e0211f"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870316 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "487c527a-7d89-4175-8827-c8cdd6e0211f" (UID: "487c527a-7d89-4175-8827-c8cdd6e0211f"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870357 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-node-log" (OuterVolumeSpecName: "node-log") pod "487c527a-7d89-4175-8827-c8cdd6e0211f" (UID: "487c527a-7d89-4175-8827-c8cdd6e0211f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870339 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "487c527a-7d89-4175-8827-c8cdd6e0211f" (UID: "487c527a-7d89-4175-8827-c8cdd6e0211f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870312 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-var-lib-openvswitch\") pod \"487c527a-7d89-4175-8827-c8cdd6e0211f\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870357 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "487c527a-7d89-4175-8827-c8cdd6e0211f" (UID: "487c527a-7d89-4175-8827-c8cdd6e0211f"). 
InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870383 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "487c527a-7d89-4175-8827-c8cdd6e0211f" (UID: "487c527a-7d89-4175-8827-c8cdd6e0211f"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870444 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-run-systemd\") pod \"487c527a-7d89-4175-8827-c8cdd6e0211f\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870486 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-systemd-units\") pod \"487c527a-7d89-4175-8827-c8cdd6e0211f\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870525 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/487c527a-7d89-4175-8827-c8cdd6e0211f-ovnkube-config\") pod \"487c527a-7d89-4175-8827-c8cdd6e0211f\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870578 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/487c527a-7d89-4175-8827-c8cdd6e0211f-env-overrides\") pod \"487c527a-7d89-4175-8827-c8cdd6e0211f\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " Jan 06 14:10:09 crc kubenswrapper[4869]: E0106 14:10:09.870580 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovnkube-controller" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870602 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovnkube-controller" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870603 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-857xw\" (UniqueName: \"kubernetes.io/projected/487c527a-7d89-4175-8827-c8cdd6e0211f-kube-api-access-857xw\") pod \"487c527a-7d89-4175-8827-c8cdd6e0211f\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870638 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/487c527a-7d89-4175-8827-c8cdd6e0211f-ovn-node-metrics-cert\") pod \"487c527a-7d89-4175-8827-c8cdd6e0211f\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870686 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-slash\") pod \"487c527a-7d89-4175-8827-c8cdd6e0211f\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870712 4869 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-kubelet\") pod \"487c527a-7d89-4175-8827-c8cdd6e0211f\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870739 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-log-socket\") pod \"487c527a-7d89-4175-8827-c8cdd6e0211f\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870779 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-run-ovn-kubernetes\") pod \"487c527a-7d89-4175-8827-c8cdd6e0211f\" (UID: \"487c527a-7d89-4175-8827-c8cdd6e0211f\") " Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.870827 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/487c527a-7d89-4175-8827-c8cdd6e0211f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "487c527a-7d89-4175-8827-c8cdd6e0211f" (UID: "487c527a-7d89-4175-8827-c8cdd6e0211f"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871056 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/487c527a-7d89-4175-8827-c8cdd6e0211f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "487c527a-7d89-4175-8827-c8cdd6e0211f" (UID: "487c527a-7d89-4175-8827-c8cdd6e0211f"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871090 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "487c527a-7d89-4175-8827-c8cdd6e0211f" (UID: "487c527a-7d89-4175-8827-c8cdd6e0211f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871111 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "487c527a-7d89-4175-8827-c8cdd6e0211f" (UID: "487c527a-7d89-4175-8827-c8cdd6e0211f"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871129 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-slash" (OuterVolumeSpecName: "host-slash") pod "487c527a-7d89-4175-8827-c8cdd6e0211f" (UID: "487c527a-7d89-4175-8827-c8cdd6e0211f"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871150 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-log-socket" (OuterVolumeSpecName: "log-socket") pod "487c527a-7d89-4175-8827-c8cdd6e0211f" (UID: "487c527a-7d89-4175-8827-c8cdd6e0211f"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871329 4869 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871344 4869 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-node-log\") on node \"crc\" DevicePath \"\"" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871357 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/487c527a-7d89-4175-8827-c8cdd6e0211f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871368 4869 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871377 4869 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871391 4869 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/487c527a-7d89-4175-8827-c8cdd6e0211f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871400 4869 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-slash\") on node \"crc\" DevicePath \"\"" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871412 4869 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871422 4869 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-log-socket\") on node \"crc\" DevicePath \"\"" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871432 4869 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871443 4869 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871453 4869 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871462 4869 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" 
DevicePath \"\"" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871474 4869 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871511 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "487c527a-7d89-4175-8827-c8cdd6e0211f" (UID: "487c527a-7d89-4175-8827-c8cdd6e0211f"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:10:09 crc kubenswrapper[4869]: E0106 14:10:09.870612 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovn-acl-logging" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871627 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovn-acl-logging" Jan 06 14:10:09 crc kubenswrapper[4869]: E0106 14:10:09.871685 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="sbdb" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871694 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="sbdb" Jan 06 14:10:09 crc kubenswrapper[4869]: E0106 14:10:09.871705 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovnkube-controller" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871712 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovnkube-controller" Jan 06 14:10:09 crc kubenswrapper[4869]: E0106 14:10:09.871738 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="kube-rbac-proxy-node" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871749 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="kube-rbac-proxy-node" Jan 06 14:10:09 crc kubenswrapper[4869]: E0106 14:10:09.871757 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="nbdb" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871764 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="nbdb" Jan 06 14:10:09 crc kubenswrapper[4869]: E0106 14:10:09.871776 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovnkube-controller" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871784 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovnkube-controller" Jan 06 14:10:09 crc kubenswrapper[4869]: E0106 14:10:09.871798 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="northd" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871804 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="northd" Jan 06 14:10:09 crc kubenswrapper[4869]: E0106 14:10:09.871815 
4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovn-controller" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871822 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovn-controller" Jan 06 14:10:09 crc kubenswrapper[4869]: E0106 14:10:09.871838 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="kube-rbac-proxy-ovn-metrics" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871845 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="kube-rbac-proxy-ovn-metrics" Jan 06 14:10:09 crc kubenswrapper[4869]: E0106 14:10:09.871858 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="kubecfg-setup" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871864 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="kubecfg-setup" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.871880 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/487c527a-7d89-4175-8827-c8cdd6e0211f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "487c527a-7d89-4175-8827-c8cdd6e0211f" (UID: "487c527a-7d89-4175-8827-c8cdd6e0211f"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.872130 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovn-controller" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.872142 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="kube-rbac-proxy-ovn-metrics" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.872152 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovnkube-controller" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.872159 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="northd" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.872167 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="nbdb" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.872174 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovnkube-controller" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.872183 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="kube-rbac-proxy-node" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.872190 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovnkube-controller" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.872200 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovnkube-controller" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.872206 4869 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="sbdb" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.872214 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovn-acl-logging" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.872246 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "487c527a-7d89-4175-8827-c8cdd6e0211f" (UID: "487c527a-7d89-4175-8827-c8cdd6e0211f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:10:09 crc kubenswrapper[4869]: E0106 14:10:09.872318 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovnkube-controller" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.872328 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovnkube-controller" Jan 06 14:10:09 crc kubenswrapper[4869]: E0106 14:10:09.872338 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovnkube-controller" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.872344 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovnkube-controller" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.872443 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerName="ovnkube-controller" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.874441 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.878623 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/487c527a-7d89-4175-8827-c8cdd6e0211f-kube-api-access-857xw" (OuterVolumeSpecName: "kube-api-access-857xw") pod "487c527a-7d89-4175-8827-c8cdd6e0211f" (UID: "487c527a-7d89-4175-8827-c8cdd6e0211f"). InnerVolumeSpecName "kube-api-access-857xw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.878902 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/487c527a-7d89-4175-8827-c8cdd6e0211f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "487c527a-7d89-4175-8827-c8cdd6e0211f" (UID: "487c527a-7d89-4175-8827-c8cdd6e0211f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.887372 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "487c527a-7d89-4175-8827-c8cdd6e0211f" (UID: "487c527a-7d89-4175-8827-c8cdd6e0211f"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.973266 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-run-ovn\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.973322 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-host-run-netns\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.973346 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-host-slash\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.973379 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzhlw\" (UniqueName: \"kubernetes.io/projected/1178230d-2cf7-4380-a8ef-dad55c05b4fe-kube-api-access-kzhlw\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.973487 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1178230d-2cf7-4380-a8ef-dad55c05b4fe-ovnkube-script-lib\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.973532 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1178230d-2cf7-4380-a8ef-dad55c05b4fe-ovnkube-config\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.973574 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-run-openvswitch\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.973612 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-run-systemd\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.973636 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-host-kubelet\") pod 
\"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.973660 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-log-socket\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.973709 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1178230d-2cf7-4380-a8ef-dad55c05b4fe-ovn-node-metrics-cert\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.973734 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-host-cni-netd\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.973755 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1178230d-2cf7-4380-a8ef-dad55c05b4fe-env-overrides\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.973781 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-etc-openvswitch\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.973803 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.973827 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-var-lib-openvswitch\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.973854 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-node-log\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.973882 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-host-run-ovn-kubernetes\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.973903 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-host-cni-bin\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.973979 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-systemd-units\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.974064 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-857xw\" (UniqueName: \"kubernetes.io/projected/487c527a-7d89-4175-8827-c8cdd6e0211f-kube-api-access-857xw\") on node \"crc\" DevicePath \"\"" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.974079 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/487c527a-7d89-4175-8827-c8cdd6e0211f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.974090 4869 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.974100 4869 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.974113 4869 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/487c527a-7d89-4175-8827-c8cdd6e0211f-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 06 14:10:09 crc kubenswrapper[4869]: I0106 14:10:09.974122 4869 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/487c527a-7d89-4175-8827-c8cdd6e0211f-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.075156 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-host-run-ovn-kubernetes\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.075232 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-host-cni-bin\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.075275 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-systemd-units\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.075321 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-run-ovn\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.075356 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-host-run-netns\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.075393 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-host-slash\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.075430 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzhlw\" (UniqueName: \"kubernetes.io/projected/1178230d-2cf7-4380-a8ef-dad55c05b4fe-kube-api-access-kzhlw\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.075482 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1178230d-2cf7-4380-a8ef-dad55c05b4fe-ovnkube-script-lib\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.075533 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1178230d-2cf7-4380-a8ef-dad55c05b4fe-ovnkube-config\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.075598 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-run-openvswitch\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.075653 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-run-systemd\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.075728 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-host-kubelet\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.075759 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-log-socket\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.075794 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1178230d-2cf7-4380-a8ef-dad55c05b4fe-ovn-node-metrics-cert\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.075830 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-host-cni-netd\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.075861 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1178230d-2cf7-4380-a8ef-dad55c05b4fe-env-overrides\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.075891 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-var-lib-openvswitch\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.076021 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-host-kubelet\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.076090 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-run-ovn\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.076119 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-host-run-ovn-kubernetes\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.076145 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-host-cni-bin\") pod \"ovnkube-node-tjgb6\" (UID: 
\"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.076178 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-host-slash\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.076231 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-host-run-netns\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.076268 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-systemd-units\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.076302 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-host-cni-netd\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.076246 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-run-openvswitch\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.076358 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-run-systemd\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.076369 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-log-socket\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.076539 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-var-lib-openvswitch\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.077178 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1178230d-2cf7-4380-a8ef-dad55c05b4fe-env-overrides\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.077547 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1178230d-2cf7-4380-a8ef-dad55c05b4fe-ovnkube-script-lib\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.077738 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1178230d-2cf7-4380-a8ef-dad55c05b4fe-ovnkube-config\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.077840 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-etc-openvswitch\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.077891 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.077937 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-node-log\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.078053 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-node-log\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.078107 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-etc-openvswitch\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.078153 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1178230d-2cf7-4380-a8ef-dad55c05b4fe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.081507 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1178230d-2cf7-4380-a8ef-dad55c05b4fe-ovn-node-metrics-cert\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.097168 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kzhlw\" (UniqueName: \"kubernetes.io/projected/1178230d-2cf7-4380-a8ef-dad55c05b4fe-kube-api-access-kzhlw\") pod \"ovnkube-node-tjgb6\" (UID: \"1178230d-2cf7-4380-a8ef-dad55c05b4fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.161599 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-ht2rp" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.217807 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.271626 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2f9tq_487c527a-7d89-4175-8827-c8cdd6e0211f/ovnkube-controller/3.log" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.276685 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2f9tq_487c527a-7d89-4175-8827-c8cdd6e0211f/ovn-acl-logging/0.log" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.278722 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2f9tq_487c527a-7d89-4175-8827-c8cdd6e0211f/ovn-controller/0.log" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.279419 4869 generic.go:334] "Generic (PLEG): container finished" podID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerID="f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb" exitCode=0 Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.279472 4869 generic.go:334] "Generic (PLEG): container finished" podID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerID="34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3" exitCode=0 Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.279492 4869 generic.go:334] "Generic (PLEG): container finished" podID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerID="1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41" exitCode=0 Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.279513 4869 generic.go:334] "Generic (PLEG): container finished" podID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerID="2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f" exitCode=0 Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.279532 4869 generic.go:334] "Generic (PLEG): container finished" podID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerID="ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb" exitCode=0 Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.279551 4869 generic.go:334] "Generic (PLEG): container finished" podID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerID="5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d" exitCode=0 Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.279569 4869 generic.go:334] "Generic (PLEG): container finished" podID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerID="6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8" exitCode=143 Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.279589 4869 generic.go:334] "Generic (PLEG): container finished" podID="487c527a-7d89-4175-8827-c8cdd6e0211f" containerID="4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a" exitCode=143 Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.279736 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerDied","Data":"f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.279810 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerDied","Data":"34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.279844 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerDied","Data":"1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.279877 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerDied","Data":"2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.279904 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerDied","Data":"ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.279934 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerDied","Data":"5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.279965 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.279987 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280003 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280021 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280037 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280053 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280068 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8"} Jan 06 14:10:10 crc 
kubenswrapper[4869]: I0106 14:10:10.280084 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280099 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280121 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerDied","Data":"6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280148 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280166 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280182 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280200 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280216 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280230 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280246 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280262 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280280 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280296 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280318 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" 
event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerDied","Data":"4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280343 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280362 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280379 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280394 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280409 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280424 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280439 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280455 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280471 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280488 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280511 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" event={"ID":"487c527a-7d89-4175-8827-c8cdd6e0211f","Type":"ContainerDied","Data":"1b6ec9d9e6372d1dd9a0588bf75844df07980546f4a55993ea1440b7d39cd0cd"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280537 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280557 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280573 4869 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280590 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280605 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280620 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280636 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280651 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280704 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280721 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280759 4869 scope.go:117] "RemoveContainer" containerID="f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.280987 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2f9tq" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.283564 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-68bvk_e40cdd2b-5d24-4ef5-995a-4e09fc90d33c/kube-multus/2.log" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.284507 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-68bvk_e40cdd2b-5d24-4ef5-995a-4e09fc90d33c/kube-multus/1.log" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.284716 4869 generic.go:334] "Generic (PLEG): container finished" podID="e40cdd2b-5d24-4ef5-995a-4e09fc90d33c" containerID="28ab89a767dce736b75ce450ca28d8c5cfff1dd703089e2e14a3e607fb54d1b6" exitCode=2 Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.284851 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-68bvk" event={"ID":"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c","Type":"ContainerDied","Data":"28ab89a767dce736b75ce450ca28d8c5cfff1dd703089e2e14a3e607fb54d1b6"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.284915 4869 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d3985462b751fad731c61b70bd276f0e2c8159ecea877bc89ed7066061842da"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.285481 4869 scope.go:117] "RemoveContainer" containerID="28ab89a767dce736b75ce450ca28d8c5cfff1dd703089e2e14a3e607fb54d1b6" Jan 06 14:10:10 crc kubenswrapper[4869]: E0106 14:10:10.285919 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-68bvk_openshift-multus(e40cdd2b-5d24-4ef5-995a-4e09fc90d33c)\"" pod="openshift-multus/multus-68bvk" podUID="e40cdd2b-5d24-4ef5-995a-4e09fc90d33c" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.286450 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" event={"ID":"1178230d-2cf7-4380-a8ef-dad55c05b4fe","Type":"ContainerStarted","Data":"c00e330adc69c2a443710410a229c358bc5715db9f3df2705a036abd03b769d5"} Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.334885 4869 scope.go:117] "RemoveContainer" containerID="eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.351010 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2f9tq"] Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.354016 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2f9tq"] Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.370595 4869 scope.go:117] "RemoveContainer" containerID="34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.393019 4869 scope.go:117] "RemoveContainer" containerID="1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.409994 4869 scope.go:117] "RemoveContainer" containerID="2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.485585 4869 scope.go:117] "RemoveContainer" containerID="ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.507687 4869 scope.go:117] "RemoveContainer" 
containerID="5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.529054 4869 scope.go:117] "RemoveContainer" containerID="6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.570440 4869 scope.go:117] "RemoveContainer" containerID="4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.595968 4869 scope.go:117] "RemoveContainer" containerID="4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.614209 4869 scope.go:117] "RemoveContainer" containerID="f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb" Jan 06 14:10:10 crc kubenswrapper[4869]: E0106 14:10:10.614809 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb\": container with ID starting with f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb not found: ID does not exist" containerID="f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.614849 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb"} err="failed to get container status \"f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb\": rpc error: code = NotFound desc = could not find container \"f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb\": container with ID starting with f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.614873 4869 scope.go:117] "RemoveContainer" containerID="eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9" Jan 06 14:10:10 crc kubenswrapper[4869]: E0106 14:10:10.615389 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9\": container with ID starting with eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9 not found: ID does not exist" containerID="eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.615411 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9"} err="failed to get container status \"eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9\": rpc error: code = NotFound desc = could not find container \"eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9\": container with ID starting with eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9 not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.615424 4869 scope.go:117] "RemoveContainer" containerID="34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3" Jan 06 14:10:10 crc kubenswrapper[4869]: E0106 14:10:10.615844 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\": container with ID starting 
with 34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3 not found: ID does not exist" containerID="34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.615864 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3"} err="failed to get container status \"34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\": rpc error: code = NotFound desc = could not find container \"34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\": container with ID starting with 34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3 not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.615876 4869 scope.go:117] "RemoveContainer" containerID="1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41" Jan 06 14:10:10 crc kubenswrapper[4869]: E0106 14:10:10.616130 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\": container with ID starting with 1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41 not found: ID does not exist" containerID="1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.616150 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41"} err="failed to get container status \"1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\": rpc error: code = NotFound desc = could not find container \"1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\": container with ID starting with 1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41 not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.616163 4869 scope.go:117] "RemoveContainer" containerID="2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f" Jan 06 14:10:10 crc kubenswrapper[4869]: E0106 14:10:10.616379 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\": container with ID starting with 2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f not found: ID does not exist" containerID="2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.616400 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f"} err="failed to get container status \"2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\": rpc error: code = NotFound desc = could not find container \"2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\": container with ID starting with 2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.616413 4869 scope.go:117] "RemoveContainer" containerID="ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb" Jan 06 14:10:10 crc kubenswrapper[4869]: E0106 14:10:10.616639 4869 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\": container with ID starting with ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb not found: ID does not exist" containerID="ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.616682 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb"} err="failed to get container status \"ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\": rpc error: code = NotFound desc = could not find container \"ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\": container with ID starting with ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.616698 4869 scope.go:117] "RemoveContainer" containerID="5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d" Jan 06 14:10:10 crc kubenswrapper[4869]: E0106 14:10:10.616972 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\": container with ID starting with 5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d not found: ID does not exist" containerID="5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.616996 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d"} err="failed to get container status \"5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\": rpc error: code = NotFound desc = could not find container \"5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\": container with ID starting with 5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.617008 4869 scope.go:117] "RemoveContainer" containerID="6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8" Jan 06 14:10:10 crc kubenswrapper[4869]: E0106 14:10:10.617238 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\": container with ID starting with 6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8 not found: ID does not exist" containerID="6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.617258 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8"} err="failed to get container status \"6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\": rpc error: code = NotFound desc = could not find container \"6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\": container with ID starting with 6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8 not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.617270 4869 scope.go:117] "RemoveContainer" 
containerID="4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a" Jan 06 14:10:10 crc kubenswrapper[4869]: E0106 14:10:10.617548 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\": container with ID starting with 4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a not found: ID does not exist" containerID="4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.617592 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a"} err="failed to get container status \"4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\": rpc error: code = NotFound desc = could not find container \"4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\": container with ID starting with 4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.617604 4869 scope.go:117] "RemoveContainer" containerID="4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e" Jan 06 14:10:10 crc kubenswrapper[4869]: E0106 14:10:10.618049 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\": container with ID starting with 4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e not found: ID does not exist" containerID="4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.618068 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e"} err="failed to get container status \"4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\": rpc error: code = NotFound desc = could not find container \"4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\": container with ID starting with 4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.618083 4869 scope.go:117] "RemoveContainer" containerID="f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.618314 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb"} err="failed to get container status \"f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb\": rpc error: code = NotFound desc = could not find container \"f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb\": container with ID starting with f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.618332 4869 scope.go:117] "RemoveContainer" containerID="eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.618547 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9"} err="failed to get container status \"eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9\": rpc error: code = NotFound desc = could not find container \"eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9\": container with ID starting with eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9 not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.618567 4869 scope.go:117] "RemoveContainer" containerID="34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.618786 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3"} err="failed to get container status \"34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\": rpc error: code = NotFound desc = could not find container \"34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\": container with ID starting with 34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3 not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.618808 4869 scope.go:117] "RemoveContainer" containerID="1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.619009 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41"} err="failed to get container status \"1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\": rpc error: code = NotFound desc = could not find container \"1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\": container with ID starting with 1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41 not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.619030 4869 scope.go:117] "RemoveContainer" containerID="2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.619205 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f"} err="failed to get container status \"2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\": rpc error: code = NotFound desc = could not find container \"2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\": container with ID starting with 2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.619224 4869 scope.go:117] "RemoveContainer" containerID="ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.619474 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb"} err="failed to get container status \"ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\": rpc error: code = NotFound desc = could not find container \"ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\": container with ID starting with ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb not found: ID does not exist" Jan 
06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.619493 4869 scope.go:117] "RemoveContainer" containerID="5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.619739 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d"} err="failed to get container status \"5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\": rpc error: code = NotFound desc = could not find container \"5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\": container with ID starting with 5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.619757 4869 scope.go:117] "RemoveContainer" containerID="6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.619963 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8"} err="failed to get container status \"6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\": rpc error: code = NotFound desc = could not find container \"6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\": container with ID starting with 6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8 not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.619981 4869 scope.go:117] "RemoveContainer" containerID="4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.620199 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a"} err="failed to get container status \"4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\": rpc error: code = NotFound desc = could not find container \"4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\": container with ID starting with 4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.620217 4869 scope.go:117] "RemoveContainer" containerID="4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.620478 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e"} err="failed to get container status \"4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\": rpc error: code = NotFound desc = could not find container \"4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\": container with ID starting with 4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.620499 4869 scope.go:117] "RemoveContainer" containerID="f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.620734 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb"} err="failed to get container status 
\"f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb\": rpc error: code = NotFound desc = could not find container \"f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb\": container with ID starting with f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.620753 4869 scope.go:117] "RemoveContainer" containerID="eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.621088 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9"} err="failed to get container status \"eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9\": rpc error: code = NotFound desc = could not find container \"eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9\": container with ID starting with eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9 not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.621105 4869 scope.go:117] "RemoveContainer" containerID="34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.621353 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3"} err="failed to get container status \"34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\": rpc error: code = NotFound desc = could not find container \"34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\": container with ID starting with 34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3 not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.621375 4869 scope.go:117] "RemoveContainer" containerID="1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.621628 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41"} err="failed to get container status \"1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\": rpc error: code = NotFound desc = could not find container \"1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\": container with ID starting with 1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41 not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.621647 4869 scope.go:117] "RemoveContainer" containerID="2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.621879 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f"} err="failed to get container status \"2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\": rpc error: code = NotFound desc = could not find container \"2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\": container with ID starting with 2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.621898 4869 scope.go:117] "RemoveContainer" 
containerID="ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.622136 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb"} err="failed to get container status \"ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\": rpc error: code = NotFound desc = could not find container \"ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\": container with ID starting with ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.622153 4869 scope.go:117] "RemoveContainer" containerID="5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.622358 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d"} err="failed to get container status \"5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\": rpc error: code = NotFound desc = could not find container \"5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\": container with ID starting with 5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.622376 4869 scope.go:117] "RemoveContainer" containerID="6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.622588 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8"} err="failed to get container status \"6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\": rpc error: code = NotFound desc = could not find container \"6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\": container with ID starting with 6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8 not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.622606 4869 scope.go:117] "RemoveContainer" containerID="4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.622810 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a"} err="failed to get container status \"4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\": rpc error: code = NotFound desc = could not find container \"4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\": container with ID starting with 4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.622828 4869 scope.go:117] "RemoveContainer" containerID="4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.623068 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e"} err="failed to get container status \"4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\": rpc error: code = NotFound desc = could not find 
container \"4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\": container with ID starting with 4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.623102 4869 scope.go:117] "RemoveContainer" containerID="f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.623315 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb"} err="failed to get container status \"f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb\": rpc error: code = NotFound desc = could not find container \"f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb\": container with ID starting with f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.623333 4869 scope.go:117] "RemoveContainer" containerID="eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.623547 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9"} err="failed to get container status \"eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9\": rpc error: code = NotFound desc = could not find container \"eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9\": container with ID starting with eb693769108066ac95f21a9ce322af06e44139cee3128e22d58c73ab7659faf9 not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.623566 4869 scope.go:117] "RemoveContainer" containerID="34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.623842 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3"} err="failed to get container status \"34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\": rpc error: code = NotFound desc = could not find container \"34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3\": container with ID starting with 34028d81b558ed9a6b94aac87348970eea4c3756aa2d2043d447b4f0fc0643b3 not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.623861 4869 scope.go:117] "RemoveContainer" containerID="1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.624080 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41"} err="failed to get container status \"1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\": rpc error: code = NotFound desc = could not find container \"1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41\": container with ID starting with 1743402530f3359b56384b277e1fb556d4afad5a689ecf1bdfb340d9f29fbd41 not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.624107 4869 scope.go:117] "RemoveContainer" containerID="2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.624295 4869 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f"} err="failed to get container status \"2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\": rpc error: code = NotFound desc = could not find container \"2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f\": container with ID starting with 2a5818f62d915747d93f9eb30c00f87045ad355aaa78847a3a5f962f3b57f76f not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.624319 4869 scope.go:117] "RemoveContainer" containerID="ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.624546 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb"} err="failed to get container status \"ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\": rpc error: code = NotFound desc = could not find container \"ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb\": container with ID starting with ec320e7b8d9150ee788e6eb9c55bdace2beeb220a6a2b9e629a2705426aea4eb not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.624571 4869 scope.go:117] "RemoveContainer" containerID="5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.624967 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d"} err="failed to get container status \"5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\": rpc error: code = NotFound desc = could not find container \"5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d\": container with ID starting with 5fe0ed4d9a68631a85ad7ae23825b50a6d482206c0560e31ebcc07e51b1aa89d not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.624990 4869 scope.go:117] "RemoveContainer" containerID="6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.625279 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8"} err="failed to get container status \"6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\": rpc error: code = NotFound desc = could not find container \"6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8\": container with ID starting with 6760000f2008226f33c230dae2bf8f2848b42a74ae07be2e6821ce4464cc4ed8 not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.625297 4869 scope.go:117] "RemoveContainer" containerID="4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.625717 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a"} err="failed to get container status \"4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\": rpc error: code = NotFound desc = could not find container \"4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a\": container with ID starting with 
4f24d9525d7189a121548cf42b774146dbd57ebf47ecbd9ef0cf4e5392e2442a not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.625737 4869 scope.go:117] "RemoveContainer" containerID="4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.626375 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e"} err="failed to get container status \"4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\": rpc error: code = NotFound desc = could not find container \"4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e\": container with ID starting with 4fda6e4ca88e01ea5718c9d109bcbfbe385a01e470678e72cc8ce326dd6c371e not found: ID does not exist" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.626397 4869 scope.go:117] "RemoveContainer" containerID="f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb" Jan 06 14:10:10 crc kubenswrapper[4869]: I0106 14:10:10.626624 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb"} err="failed to get container status \"f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb\": rpc error: code = NotFound desc = could not find container \"f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb\": container with ID starting with f7ef77a89c6b985f6b221c48b0fa2c3c5d04bcc31613246ec2617a8206f68fcb not found: ID does not exist" Jan 06 14:10:11 crc kubenswrapper[4869]: I0106 14:10:11.297880 4869 generic.go:334] "Generic (PLEG): container finished" podID="1178230d-2cf7-4380-a8ef-dad55c05b4fe" containerID="0ebffc123f45f6ae0931f4837316a72dce59ee42f18b656189b9e22b5eb4736a" exitCode=0 Jan 06 14:10:11 crc kubenswrapper[4869]: I0106 14:10:11.298016 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" event={"ID":"1178230d-2cf7-4380-a8ef-dad55c05b4fe","Type":"ContainerDied","Data":"0ebffc123f45f6ae0931f4837316a72dce59ee42f18b656189b9e22b5eb4736a"} Jan 06 14:10:11 crc kubenswrapper[4869]: I0106 14:10:11.722273 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="487c527a-7d89-4175-8827-c8cdd6e0211f" path="/var/lib/kubelet/pods/487c527a-7d89-4175-8827-c8cdd6e0211f/volumes" Jan 06 14:10:12 crc kubenswrapper[4869]: I0106 14:10:12.310578 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" event={"ID":"1178230d-2cf7-4380-a8ef-dad55c05b4fe","Type":"ContainerStarted","Data":"3cea8dabb71168ebdf89cdec0e39b74fa27e75acab54e7a25a34055a8948fc18"} Jan 06 14:10:12 crc kubenswrapper[4869]: I0106 14:10:12.310973 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" event={"ID":"1178230d-2cf7-4380-a8ef-dad55c05b4fe","Type":"ContainerStarted","Data":"3fe01665d22b89e28e745f136e63c37f863cc8cc0b12a7b40b2ddebe36ee722b"} Jan 06 14:10:12 crc kubenswrapper[4869]: I0106 14:10:12.310986 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" event={"ID":"1178230d-2cf7-4380-a8ef-dad55c05b4fe","Type":"ContainerStarted","Data":"9a295f91d77e85494b05cccc72e8b6dc439a86b1cc2204e57a565bdf679d2aff"} Jan 06 14:10:12 crc kubenswrapper[4869]: I0106 14:10:12.310997 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" event={"ID":"1178230d-2cf7-4380-a8ef-dad55c05b4fe","Type":"ContainerStarted","Data":"46bb445c10f075b95eeafcfac360c3ef690a48280a0e30bfbff4a67b1662e844"} Jan 06 14:10:12 crc kubenswrapper[4869]: I0106 14:10:12.311006 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" event={"ID":"1178230d-2cf7-4380-a8ef-dad55c05b4fe","Type":"ContainerStarted","Data":"a298bb31f12666f2e83739c39ba6654f5e3b9d29ef378b5dce70de374eeb5724"} Jan 06 14:10:13 crc kubenswrapper[4869]: I0106 14:10:13.325964 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" event={"ID":"1178230d-2cf7-4380-a8ef-dad55c05b4fe","Type":"ContainerStarted","Data":"e898c3f1abe1f8a3fa7bbb3e2fe92fbfac888c908f9bf6e7f278c3504805f379"} Jan 06 14:10:15 crc kubenswrapper[4869]: I0106 14:10:15.348326 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" event={"ID":"1178230d-2cf7-4380-a8ef-dad55c05b4fe","Type":"ContainerStarted","Data":"006d5eee7c06308924d98f0a0e61ce3bf4a26d14288fb21943edb2e78cc44bc1"} Jan 06 14:10:17 crc kubenswrapper[4869]: I0106 14:10:17.366590 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" event={"ID":"1178230d-2cf7-4380-a8ef-dad55c05b4fe","Type":"ContainerStarted","Data":"b5042beb79c692f94cc86404f4a37f953855b651b9ad0c3f7415e894e8fb495a"} Jan 06 14:10:17 crc kubenswrapper[4869]: I0106 14:10:17.367268 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:17 crc kubenswrapper[4869]: I0106 14:10:17.367282 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:17 crc kubenswrapper[4869]: I0106 14:10:17.367291 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:17 crc kubenswrapper[4869]: I0106 14:10:17.396198 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" podStartSLOduration=8.396175431 podStartE2EDuration="8.396175431s" podCreationTimestamp="2026-01-06 14:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:10:17.394475499 +0000 UTC m=+635.934163163" watchObservedRunningTime="2026-01-06 14:10:17.396175431 +0000 UTC m=+635.935863105" Jan 06 14:10:17 crc kubenswrapper[4869]: I0106 14:10:17.405849 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:17 crc kubenswrapper[4869]: I0106 14:10:17.406386 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:23 crc kubenswrapper[4869]: I0106 14:10:23.704574 4869 scope.go:117] "RemoveContainer" containerID="28ab89a767dce736b75ce450ca28d8c5cfff1dd703089e2e14a3e607fb54d1b6" Jan 06 14:10:23 crc kubenswrapper[4869]: E0106 14:10:23.705739 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-68bvk_openshift-multus(e40cdd2b-5d24-4ef5-995a-4e09fc90d33c)\"" pod="openshift-multus/multus-68bvk" 
podUID="e40cdd2b-5d24-4ef5-995a-4e09fc90d33c" Jan 06 14:10:34 crc kubenswrapper[4869]: I0106 14:10:34.704810 4869 scope.go:117] "RemoveContainer" containerID="28ab89a767dce736b75ce450ca28d8c5cfff1dd703089e2e14a3e607fb54d1b6" Jan 06 14:10:36 crc kubenswrapper[4869]: I0106 14:10:36.505361 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-68bvk_e40cdd2b-5d24-4ef5-995a-4e09fc90d33c/kube-multus/2.log" Jan 06 14:10:36 crc kubenswrapper[4869]: I0106 14:10:36.507187 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-68bvk_e40cdd2b-5d24-4ef5-995a-4e09fc90d33c/kube-multus/1.log" Jan 06 14:10:36 crc kubenswrapper[4869]: I0106 14:10:36.507297 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-68bvk" event={"ID":"e40cdd2b-5d24-4ef5-995a-4e09fc90d33c","Type":"ContainerStarted","Data":"e8db3a8cd9ea241c28009c66abd9a9f7d63af4ec3cafaf546656745da2cfa8f5"} Jan 06 14:10:40 crc kubenswrapper[4869]: I0106 14:10:40.251698 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tjgb6" Jan 06 14:10:48 crc kubenswrapper[4869]: I0106 14:10:48.214231 4869 scope.go:117] "RemoveContainer" containerID="4d3985462b751fad731c61b70bd276f0e2c8159ecea877bc89ed7066061842da" Jan 06 14:10:48 crc kubenswrapper[4869]: I0106 14:10:48.581968 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-68bvk_e40cdd2b-5d24-4ef5-995a-4e09fc90d33c/kube-multus/2.log" Jan 06 14:10:52 crc kubenswrapper[4869]: I0106 14:10:52.708643 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh"] Jan 06 14:10:52 crc kubenswrapper[4869]: I0106 14:10:52.710278 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh" Jan 06 14:10:52 crc kubenswrapper[4869]: I0106 14:10:52.712973 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 06 14:10:52 crc kubenswrapper[4869]: I0106 14:10:52.720447 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh"] Jan 06 14:10:52 crc kubenswrapper[4869]: I0106 14:10:52.751317 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78bf3822-20b3-46ab-bdf0-ab7b83b17327-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh\" (UID: \"78bf3822-20b3-46ab-bdf0-ab7b83b17327\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh" Jan 06 14:10:52 crc kubenswrapper[4869]: I0106 14:10:52.751387 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78bf3822-20b3-46ab-bdf0-ab7b83b17327-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh\" (UID: \"78bf3822-20b3-46ab-bdf0-ab7b83b17327\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh" Jan 06 14:10:52 crc kubenswrapper[4869]: I0106 14:10:52.751437 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr8zb\" (UniqueName: \"kubernetes.io/projected/78bf3822-20b3-46ab-bdf0-ab7b83b17327-kube-api-access-cr8zb\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh\" (UID: \"78bf3822-20b3-46ab-bdf0-ab7b83b17327\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh" Jan 06 14:10:52 crc kubenswrapper[4869]: I0106 14:10:52.854225 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78bf3822-20b3-46ab-bdf0-ab7b83b17327-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh\" (UID: \"78bf3822-20b3-46ab-bdf0-ab7b83b17327\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh" Jan 06 14:10:52 crc kubenswrapper[4869]: I0106 14:10:52.854631 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78bf3822-20b3-46ab-bdf0-ab7b83b17327-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh\" (UID: \"78bf3822-20b3-46ab-bdf0-ab7b83b17327\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh" Jan 06 14:10:52 crc kubenswrapper[4869]: I0106 14:10:52.854922 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78bf3822-20b3-46ab-bdf0-ab7b83b17327-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh\" (UID: \"78bf3822-20b3-46ab-bdf0-ab7b83b17327\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh" Jan 06 14:10:52 crc kubenswrapper[4869]: I0106 14:10:52.855051 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78bf3822-20b3-46ab-bdf0-ab7b83b17327-util\") pod 
\"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh\" (UID: \"78bf3822-20b3-46ab-bdf0-ab7b83b17327\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh" Jan 06 14:10:52 crc kubenswrapper[4869]: I0106 14:10:52.855116 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr8zb\" (UniqueName: \"kubernetes.io/projected/78bf3822-20b3-46ab-bdf0-ab7b83b17327-kube-api-access-cr8zb\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh\" (UID: \"78bf3822-20b3-46ab-bdf0-ab7b83b17327\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh" Jan 06 14:10:52 crc kubenswrapper[4869]: I0106 14:10:52.875499 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr8zb\" (UniqueName: \"kubernetes.io/projected/78bf3822-20b3-46ab-bdf0-ab7b83b17327-kube-api-access-cr8zb\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh\" (UID: \"78bf3822-20b3-46ab-bdf0-ab7b83b17327\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh" Jan 06 14:10:53 crc kubenswrapper[4869]: I0106 14:10:53.082129 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh" Jan 06 14:10:53 crc kubenswrapper[4869]: I0106 14:10:53.262739 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh"] Jan 06 14:10:53 crc kubenswrapper[4869]: I0106 14:10:53.611312 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh" event={"ID":"78bf3822-20b3-46ab-bdf0-ab7b83b17327","Type":"ContainerStarted","Data":"3ee77400aa003a2618deabc3daa749a5970ca45321ecea5d8832357cd43b0040"} Jan 06 14:10:53 crc kubenswrapper[4869]: I0106 14:10:53.611359 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh" event={"ID":"78bf3822-20b3-46ab-bdf0-ab7b83b17327","Type":"ContainerStarted","Data":"7d19d29c3f92d9280bab11f7e82fab3370a7316ba248f46236252cee3f3e850c"} Jan 06 14:10:54 crc kubenswrapper[4869]: I0106 14:10:54.626161 4869 generic.go:334] "Generic (PLEG): container finished" podID="78bf3822-20b3-46ab-bdf0-ab7b83b17327" containerID="3ee77400aa003a2618deabc3daa749a5970ca45321ecea5d8832357cd43b0040" exitCode=0 Jan 06 14:10:54 crc kubenswrapper[4869]: I0106 14:10:54.626210 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh" event={"ID":"78bf3822-20b3-46ab-bdf0-ab7b83b17327","Type":"ContainerDied","Data":"3ee77400aa003a2618deabc3daa749a5970ca45321ecea5d8832357cd43b0040"} Jan 06 14:10:56 crc kubenswrapper[4869]: I0106 14:10:56.641181 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh" event={"ID":"78bf3822-20b3-46ab-bdf0-ab7b83b17327","Type":"ContainerStarted","Data":"b1a18686a8bd972e0c9f4610d9c954457824d5a6f50a42a6ce7c74709bf6e0b9"} Jan 06 14:10:57 crc kubenswrapper[4869]: I0106 14:10:57.650701 4869 generic.go:334] "Generic (PLEG): container finished" podID="78bf3822-20b3-46ab-bdf0-ab7b83b17327" 
containerID="b1a18686a8bd972e0c9f4610d9c954457824d5a6f50a42a6ce7c74709bf6e0b9" exitCode=0 Jan 06 14:10:57 crc kubenswrapper[4869]: I0106 14:10:57.650761 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh" event={"ID":"78bf3822-20b3-46ab-bdf0-ab7b83b17327","Type":"ContainerDied","Data":"b1a18686a8bd972e0c9f4610d9c954457824d5a6f50a42a6ce7c74709bf6e0b9"} Jan 06 14:10:58 crc kubenswrapper[4869]: I0106 14:10:58.661743 4869 generic.go:334] "Generic (PLEG): container finished" podID="78bf3822-20b3-46ab-bdf0-ab7b83b17327" containerID="5c36f24866bd73cfa59707328479d6e1120c782eef7057cac93fb7cd563f6c0a" exitCode=0 Jan 06 14:10:58 crc kubenswrapper[4869]: I0106 14:10:58.661807 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh" event={"ID":"78bf3822-20b3-46ab-bdf0-ab7b83b17327","Type":"ContainerDied","Data":"5c36f24866bd73cfa59707328479d6e1120c782eef7057cac93fb7cd563f6c0a"} Jan 06 14:10:59 crc kubenswrapper[4869]: I0106 14:10:59.949911 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh" Jan 06 14:11:00 crc kubenswrapper[4869]: I0106 14:11:00.058209 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78bf3822-20b3-46ab-bdf0-ab7b83b17327-util\") pod \"78bf3822-20b3-46ab-bdf0-ab7b83b17327\" (UID: \"78bf3822-20b3-46ab-bdf0-ab7b83b17327\") " Jan 06 14:11:00 crc kubenswrapper[4869]: I0106 14:11:00.058441 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr8zb\" (UniqueName: \"kubernetes.io/projected/78bf3822-20b3-46ab-bdf0-ab7b83b17327-kube-api-access-cr8zb\") pod \"78bf3822-20b3-46ab-bdf0-ab7b83b17327\" (UID: \"78bf3822-20b3-46ab-bdf0-ab7b83b17327\") " Jan 06 14:11:00 crc kubenswrapper[4869]: I0106 14:11:00.058472 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78bf3822-20b3-46ab-bdf0-ab7b83b17327-bundle\") pod \"78bf3822-20b3-46ab-bdf0-ab7b83b17327\" (UID: \"78bf3822-20b3-46ab-bdf0-ab7b83b17327\") " Jan 06 14:11:00 crc kubenswrapper[4869]: I0106 14:11:00.059323 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78bf3822-20b3-46ab-bdf0-ab7b83b17327-bundle" (OuterVolumeSpecName: "bundle") pod "78bf3822-20b3-46ab-bdf0-ab7b83b17327" (UID: "78bf3822-20b3-46ab-bdf0-ab7b83b17327"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:11:00 crc kubenswrapper[4869]: I0106 14:11:00.065015 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78bf3822-20b3-46ab-bdf0-ab7b83b17327-kube-api-access-cr8zb" (OuterVolumeSpecName: "kube-api-access-cr8zb") pod "78bf3822-20b3-46ab-bdf0-ab7b83b17327" (UID: "78bf3822-20b3-46ab-bdf0-ab7b83b17327"). InnerVolumeSpecName "kube-api-access-cr8zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:11:00 crc kubenswrapper[4869]: I0106 14:11:00.069159 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78bf3822-20b3-46ab-bdf0-ab7b83b17327-util" (OuterVolumeSpecName: "util") pod "78bf3822-20b3-46ab-bdf0-ab7b83b17327" (UID: "78bf3822-20b3-46ab-bdf0-ab7b83b17327"). 
InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:11:00 crc kubenswrapper[4869]: I0106 14:11:00.160071 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr8zb\" (UniqueName: \"kubernetes.io/projected/78bf3822-20b3-46ab-bdf0-ab7b83b17327-kube-api-access-cr8zb\") on node \"crc\" DevicePath \"\"" Jan 06 14:11:00 crc kubenswrapper[4869]: I0106 14:11:00.160111 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/78bf3822-20b3-46ab-bdf0-ab7b83b17327-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:11:00 crc kubenswrapper[4869]: I0106 14:11:00.160125 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/78bf3822-20b3-46ab-bdf0-ab7b83b17327-util\") on node \"crc\" DevicePath \"\"" Jan 06 14:11:00 crc kubenswrapper[4869]: I0106 14:11:00.689158 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh" event={"ID":"78bf3822-20b3-46ab-bdf0-ab7b83b17327","Type":"ContainerDied","Data":"7d19d29c3f92d9280bab11f7e82fab3370a7316ba248f46236252cee3f3e850c"} Jan 06 14:11:00 crc kubenswrapper[4869]: I0106 14:11:00.689290 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh" Jan 06 14:11:00 crc kubenswrapper[4869]: I0106 14:11:00.689912 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d19d29c3f92d9280bab11f7e82fab3370a7316ba248f46236252cee3f3e850c" Jan 06 14:11:02 crc kubenswrapper[4869]: I0106 14:11:02.780545 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-4tzkd"] Jan 06 14:11:02 crc kubenswrapper[4869]: E0106 14:11:02.780799 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78bf3822-20b3-46ab-bdf0-ab7b83b17327" containerName="extract" Jan 06 14:11:02 crc kubenswrapper[4869]: I0106 14:11:02.780812 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="78bf3822-20b3-46ab-bdf0-ab7b83b17327" containerName="extract" Jan 06 14:11:02 crc kubenswrapper[4869]: E0106 14:11:02.780830 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78bf3822-20b3-46ab-bdf0-ab7b83b17327" containerName="pull" Jan 06 14:11:02 crc kubenswrapper[4869]: I0106 14:11:02.780836 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="78bf3822-20b3-46ab-bdf0-ab7b83b17327" containerName="pull" Jan 06 14:11:02 crc kubenswrapper[4869]: E0106 14:11:02.780853 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78bf3822-20b3-46ab-bdf0-ab7b83b17327" containerName="util" Jan 06 14:11:02 crc kubenswrapper[4869]: I0106 14:11:02.780860 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="78bf3822-20b3-46ab-bdf0-ab7b83b17327" containerName="util" Jan 06 14:11:02 crc kubenswrapper[4869]: I0106 14:11:02.780975 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="78bf3822-20b3-46ab-bdf0-ab7b83b17327" containerName="extract" Jan 06 14:11:02 crc kubenswrapper[4869]: I0106 14:11:02.781413 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-4tzkd" Jan 06 14:11:02 crc kubenswrapper[4869]: I0106 14:11:02.783924 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 06 14:11:02 crc kubenswrapper[4869]: I0106 14:11:02.784145 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-mq6zh" Jan 06 14:11:02 crc kubenswrapper[4869]: I0106 14:11:02.784269 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 06 14:11:02 crc kubenswrapper[4869]: I0106 14:11:02.794951 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-4tzkd"] Jan 06 14:11:02 crc kubenswrapper[4869]: I0106 14:11:02.796970 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph4gm\" (UniqueName: \"kubernetes.io/projected/372ffc0b-adac-4b19-82d9-c697aeebdc15-kube-api-access-ph4gm\") pod \"nmstate-operator-6769fb99d-4tzkd\" (UID: \"372ffc0b-adac-4b19-82d9-c697aeebdc15\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-4tzkd" Jan 06 14:11:02 crc kubenswrapper[4869]: I0106 14:11:02.898743 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph4gm\" (UniqueName: \"kubernetes.io/projected/372ffc0b-adac-4b19-82d9-c697aeebdc15-kube-api-access-ph4gm\") pod \"nmstate-operator-6769fb99d-4tzkd\" (UID: \"372ffc0b-adac-4b19-82d9-c697aeebdc15\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-4tzkd" Jan 06 14:11:02 crc kubenswrapper[4869]: I0106 14:11:02.922962 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph4gm\" (UniqueName: \"kubernetes.io/projected/372ffc0b-adac-4b19-82d9-c697aeebdc15-kube-api-access-ph4gm\") pod \"nmstate-operator-6769fb99d-4tzkd\" (UID: \"372ffc0b-adac-4b19-82d9-c697aeebdc15\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-4tzkd" Jan 06 14:11:03 crc kubenswrapper[4869]: I0106 14:11:03.100930 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-4tzkd" Jan 06 14:11:03 crc kubenswrapper[4869]: I0106 14:11:03.324863 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-4tzkd"] Jan 06 14:11:03 crc kubenswrapper[4869]: I0106 14:11:03.712937 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-4tzkd" event={"ID":"372ffc0b-adac-4b19-82d9-c697aeebdc15","Type":"ContainerStarted","Data":"0778660e556ed67f77f4fe042298df983e02b8ee84e9fced57ce3a50310d496a"} Jan 06 14:11:06 crc kubenswrapper[4869]: I0106 14:11:06.729443 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-4tzkd" event={"ID":"372ffc0b-adac-4b19-82d9-c697aeebdc15","Type":"ContainerStarted","Data":"1cb454360b0651e8226946eff0ea5278f6eb9a4a30eeb7967c4dafa096e15bd7"} Jan 06 14:11:06 crc kubenswrapper[4869]: I0106 14:11:06.747880 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-4tzkd" podStartSLOduration=2.5255364609999997 podStartE2EDuration="4.747860275s" podCreationTimestamp="2026-01-06 14:11:02 +0000 UTC" firstStartedPulling="2026-01-06 14:11:03.336576453 +0000 UTC m=+681.876264127" lastFinishedPulling="2026-01-06 14:11:05.558900237 +0000 UTC m=+684.098587941" observedRunningTime="2026-01-06 14:11:06.745738847 +0000 UTC m=+685.285426561" watchObservedRunningTime="2026-01-06 14:11:06.747860275 +0000 UTC m=+685.287547949" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.119500 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-6fgd7"] Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.121197 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6fgd7" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.123504 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-pcsc8" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.134039 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-jkxsj"] Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.134818 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-jkxsj" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.139858 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.143435 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spbvk\" (UniqueName: \"kubernetes.io/projected/c0737021-333c-4fb1-a387-75dcff62515c-kube-api-access-spbvk\") pod \"nmstate-metrics-7f7f7578db-6fgd7\" (UID: \"c0737021-333c-4fb1-a387-75dcff62515c\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6fgd7" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.143493 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b8cg\" (UniqueName: \"kubernetes.io/projected/483878fb-3dc5-49d2-8765-5f3cb8cbf8f2-kube-api-access-9b8cg\") pod \"nmstate-webhook-f8fb84555-jkxsj\" (UID: \"483878fb-3dc5-49d2-8765-5f3cb8cbf8f2\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-jkxsj" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.143527 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/483878fb-3dc5-49d2-8765-5f3cb8cbf8f2-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-jkxsj\" (UID: \"483878fb-3dc5-49d2-8765-5f3cb8cbf8f2\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-jkxsj" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.144702 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-6fgd7"] Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.166467 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-8jk6n"] Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.167479 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-8jk6n" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.179559 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-jkxsj"] Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.244602 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0-dbus-socket\") pod \"nmstate-handler-8jk6n\" (UID: \"06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0\") " pod="openshift-nmstate/nmstate-handler-8jk6n" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.244658 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0-ovs-socket\") pod \"nmstate-handler-8jk6n\" (UID: \"06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0\") " pod="openshift-nmstate/nmstate-handler-8jk6n" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.244700 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0-nmstate-lock\") pod \"nmstate-handler-8jk6n\" (UID: \"06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0\") " pod="openshift-nmstate/nmstate-handler-8jk6n" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.244734 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spbvk\" (UniqueName: \"kubernetes.io/projected/c0737021-333c-4fb1-a387-75dcff62515c-kube-api-access-spbvk\") pod \"nmstate-metrics-7f7f7578db-6fgd7\" (UID: \"c0737021-333c-4fb1-a387-75dcff62515c\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6fgd7" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.244760 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b8cg\" (UniqueName: \"kubernetes.io/projected/483878fb-3dc5-49d2-8765-5f3cb8cbf8f2-kube-api-access-9b8cg\") pod \"nmstate-webhook-f8fb84555-jkxsj\" (UID: \"483878fb-3dc5-49d2-8765-5f3cb8cbf8f2\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-jkxsj" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.244793 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/483878fb-3dc5-49d2-8765-5f3cb8cbf8f2-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-jkxsj\" (UID: \"483878fb-3dc5-49d2-8765-5f3cb8cbf8f2\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-jkxsj" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.244822 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nltt7\" (UniqueName: \"kubernetes.io/projected/06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0-kube-api-access-nltt7\") pod \"nmstate-handler-8jk6n\" (UID: \"06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0\") " pod="openshift-nmstate/nmstate-handler-8jk6n" Jan 06 14:11:13 crc kubenswrapper[4869]: E0106 14:11:13.245444 4869 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 06 14:11:13 crc kubenswrapper[4869]: E0106 14:11:13.245505 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/483878fb-3dc5-49d2-8765-5f3cb8cbf8f2-tls-key-pair podName:483878fb-3dc5-49d2-8765-5f3cb8cbf8f2 nodeName:}" failed. 
No retries permitted until 2026-01-06 14:11:13.745482959 +0000 UTC m=+692.285170623 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/483878fb-3dc5-49d2-8765-5f3cb8cbf8f2-tls-key-pair") pod "nmstate-webhook-f8fb84555-jkxsj" (UID: "483878fb-3dc5-49d2-8765-5f3cb8cbf8f2") : secret "openshift-nmstate-webhook" not found Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.264707 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spbvk\" (UniqueName: \"kubernetes.io/projected/c0737021-333c-4fb1-a387-75dcff62515c-kube-api-access-spbvk\") pod \"nmstate-metrics-7f7f7578db-6fgd7\" (UID: \"c0737021-333c-4fb1-a387-75dcff62515c\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6fgd7" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.264797 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b8cg\" (UniqueName: \"kubernetes.io/projected/483878fb-3dc5-49d2-8765-5f3cb8cbf8f2-kube-api-access-9b8cg\") pod \"nmstate-webhook-f8fb84555-jkxsj\" (UID: \"483878fb-3dc5-49d2-8765-5f3cb8cbf8f2\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-jkxsj" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.298913 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-4g7qv"] Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.300190 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-4g7qv" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.302319 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.302619 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.302647 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-98ltt" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.316138 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-4g7qv"] Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.346295 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-4g7qv\" (UID: \"9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-4g7qv" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.346710 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0-dbus-socket\") pod \"nmstate-handler-8jk6n\" (UID: \"06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0\") " pod="openshift-nmstate/nmstate-handler-8jk6n" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.346845 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0-ovs-socket\") pod \"nmstate-handler-8jk6n\" (UID: \"06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0\") " pod="openshift-nmstate/nmstate-handler-8jk6n" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.346948 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0-nmstate-lock\") pod \"nmstate-handler-8jk6n\" (UID: \"06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0\") " pod="openshift-nmstate/nmstate-handler-8jk6n" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.347027 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0-nmstate-lock\") pod \"nmstate-handler-8jk6n\" (UID: \"06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0\") " pod="openshift-nmstate/nmstate-handler-8jk6n" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.346968 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0-ovs-socket\") pod \"nmstate-handler-8jk6n\" (UID: \"06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0\") " pod="openshift-nmstate/nmstate-handler-8jk6n" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.347200 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0-dbus-socket\") pod \"nmstate-handler-8jk6n\" (UID: \"06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0\") " pod="openshift-nmstate/nmstate-handler-8jk6n" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.347323 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-4g7qv\" (UID: \"9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-4g7qv" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.347473 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nltt7\" (UniqueName: \"kubernetes.io/projected/06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0-kube-api-access-nltt7\") pod \"nmstate-handler-8jk6n\" (UID: \"06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0\") " pod="openshift-nmstate/nmstate-handler-8jk6n" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.347575 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5brrs\" (UniqueName: \"kubernetes.io/projected/9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3-kube-api-access-5brrs\") pod \"nmstate-console-plugin-6ff7998486-4g7qv\" (UID: \"9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-4g7qv" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.367026 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nltt7\" (UniqueName: \"kubernetes.io/projected/06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0-kube-api-access-nltt7\") pod \"nmstate-handler-8jk6n\" (UID: \"06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0\") " pod="openshift-nmstate/nmstate-handler-8jk6n" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.445706 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6fgd7" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.449088 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-4g7qv\" (UID: \"9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-4g7qv" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.449208 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-4g7qv\" (UID: \"9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-4g7qv" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.449294 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5brrs\" (UniqueName: \"kubernetes.io/projected/9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3-kube-api-access-5brrs\") pod \"nmstate-console-plugin-6ff7998486-4g7qv\" (UID: \"9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-4g7qv" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.450188 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-4g7qv\" (UID: \"9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-4g7qv" Jan 06 14:11:13 crc kubenswrapper[4869]: E0106 14:11:13.450289 4869 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 06 14:11:13 crc kubenswrapper[4869]: E0106 14:11:13.450344 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3-plugin-serving-cert podName:9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3 nodeName:}" failed. No retries permitted until 2026-01-06 14:11:13.950330662 +0000 UTC m=+692.490018316 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3-plugin-serving-cert") pod "nmstate-console-plugin-6ff7998486-4g7qv" (UID: "9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3") : secret "plugin-serving-cert" not found Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.472973 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5brrs\" (UniqueName: \"kubernetes.io/projected/9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3-kube-api-access-5brrs\") pod \"nmstate-console-plugin-6ff7998486-4g7qv\" (UID: \"9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-4g7qv" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.488427 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-8jk6n" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.495432 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6bcb6bc44f-2zvsc"] Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.496420 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.518811 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6bcb6bc44f-2zvsc"] Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.551014 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01626b9e-ead7-4ccb-ab29-a5ffc33c045f-trusted-ca-bundle\") pod \"console-6bcb6bc44f-2zvsc\" (UID: \"01626b9e-ead7-4ccb-ab29-a5ffc33c045f\") " pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.551082 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/01626b9e-ead7-4ccb-ab29-a5ffc33c045f-console-config\") pod \"console-6bcb6bc44f-2zvsc\" (UID: \"01626b9e-ead7-4ccb-ab29-a5ffc33c045f\") " pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.551118 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/01626b9e-ead7-4ccb-ab29-a5ffc33c045f-console-serving-cert\") pod \"console-6bcb6bc44f-2zvsc\" (UID: \"01626b9e-ead7-4ccb-ab29-a5ffc33c045f\") " pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.551147 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/01626b9e-ead7-4ccb-ab29-a5ffc33c045f-oauth-serving-cert\") pod \"console-6bcb6bc44f-2zvsc\" (UID: \"01626b9e-ead7-4ccb-ab29-a5ffc33c045f\") " pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.551176 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/01626b9e-ead7-4ccb-ab29-a5ffc33c045f-console-oauth-config\") pod \"console-6bcb6bc44f-2zvsc\" (UID: \"01626b9e-ead7-4ccb-ab29-a5ffc33c045f\") " pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.551212 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/01626b9e-ead7-4ccb-ab29-a5ffc33c045f-service-ca\") pod \"console-6bcb6bc44f-2zvsc\" (UID: \"01626b9e-ead7-4ccb-ab29-a5ffc33c045f\") " pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.551264 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wv9l\" (UniqueName: \"kubernetes.io/projected/01626b9e-ead7-4ccb-ab29-a5ffc33c045f-kube-api-access-7wv9l\") pod \"console-6bcb6bc44f-2zvsc\" (UID: \"01626b9e-ead7-4ccb-ab29-a5ffc33c045f\") " pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.651765 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/01626b9e-ead7-4ccb-ab29-a5ffc33c045f-service-ca\") pod \"console-6bcb6bc44f-2zvsc\" (UID: \"01626b9e-ead7-4ccb-ab29-a5ffc33c045f\") " pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc 
kubenswrapper[4869]: I0106 14:11:13.652302 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wv9l\" (UniqueName: \"kubernetes.io/projected/01626b9e-ead7-4ccb-ab29-a5ffc33c045f-kube-api-access-7wv9l\") pod \"console-6bcb6bc44f-2zvsc\" (UID: \"01626b9e-ead7-4ccb-ab29-a5ffc33c045f\") " pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.652403 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01626b9e-ead7-4ccb-ab29-a5ffc33c045f-trusted-ca-bundle\") pod \"console-6bcb6bc44f-2zvsc\" (UID: \"01626b9e-ead7-4ccb-ab29-a5ffc33c045f\") " pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.652444 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/01626b9e-ead7-4ccb-ab29-a5ffc33c045f-console-config\") pod \"console-6bcb6bc44f-2zvsc\" (UID: \"01626b9e-ead7-4ccb-ab29-a5ffc33c045f\") " pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.652475 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/01626b9e-ead7-4ccb-ab29-a5ffc33c045f-console-serving-cert\") pod \"console-6bcb6bc44f-2zvsc\" (UID: \"01626b9e-ead7-4ccb-ab29-a5ffc33c045f\") " pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.652503 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/01626b9e-ead7-4ccb-ab29-a5ffc33c045f-oauth-serving-cert\") pod \"console-6bcb6bc44f-2zvsc\" (UID: \"01626b9e-ead7-4ccb-ab29-a5ffc33c045f\") " pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.652527 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/01626b9e-ead7-4ccb-ab29-a5ffc33c045f-console-oauth-config\") pod \"console-6bcb6bc44f-2zvsc\" (UID: \"01626b9e-ead7-4ccb-ab29-a5ffc33c045f\") " pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.653831 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/01626b9e-ead7-4ccb-ab29-a5ffc33c045f-oauth-serving-cert\") pod \"console-6bcb6bc44f-2zvsc\" (UID: \"01626b9e-ead7-4ccb-ab29-a5ffc33c045f\") " pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.654114 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/01626b9e-ead7-4ccb-ab29-a5ffc33c045f-console-config\") pod \"console-6bcb6bc44f-2zvsc\" (UID: \"01626b9e-ead7-4ccb-ab29-a5ffc33c045f\") " pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.654160 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01626b9e-ead7-4ccb-ab29-a5ffc33c045f-trusted-ca-bundle\") pod \"console-6bcb6bc44f-2zvsc\" (UID: \"01626b9e-ead7-4ccb-ab29-a5ffc33c045f\") " pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc 
kubenswrapper[4869]: I0106 14:11:13.654796 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/01626b9e-ead7-4ccb-ab29-a5ffc33c045f-service-ca\") pod \"console-6bcb6bc44f-2zvsc\" (UID: \"01626b9e-ead7-4ccb-ab29-a5ffc33c045f\") " pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.657296 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/01626b9e-ead7-4ccb-ab29-a5ffc33c045f-console-oauth-config\") pod \"console-6bcb6bc44f-2zvsc\" (UID: \"01626b9e-ead7-4ccb-ab29-a5ffc33c045f\") " pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.657612 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/01626b9e-ead7-4ccb-ab29-a5ffc33c045f-console-serving-cert\") pod \"console-6bcb6bc44f-2zvsc\" (UID: \"01626b9e-ead7-4ccb-ab29-a5ffc33c045f\") " pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.678239 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wv9l\" (UniqueName: \"kubernetes.io/projected/01626b9e-ead7-4ccb-ab29-a5ffc33c045f-kube-api-access-7wv9l\") pod \"console-6bcb6bc44f-2zvsc\" (UID: \"01626b9e-ead7-4ccb-ab29-a5ffc33c045f\") " pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.755752 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/483878fb-3dc5-49d2-8765-5f3cb8cbf8f2-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-jkxsj\" (UID: \"483878fb-3dc5-49d2-8765-5f3cb8cbf8f2\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-jkxsj" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.761279 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/483878fb-3dc5-49d2-8765-5f3cb8cbf8f2-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-jkxsj\" (UID: \"483878fb-3dc5-49d2-8765-5f3cb8cbf8f2\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-jkxsj" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.775732 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8jk6n" event={"ID":"06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0","Type":"ContainerStarted","Data":"0656cf18de2d549c6ed0160002efb6f6ac1dc7eda23f987b4fb56fc68db397cc"} Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.830428 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.953752 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-6fgd7"] Jan 06 14:11:13 crc kubenswrapper[4869]: W0106 14:11:13.954794 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0737021_333c_4fb1_a387_75dcff62515c.slice/crio-e0b22d6a0c84f6b76bacea29afafe6a564f97dcddedb6dc888a4af7459e5c4a0 WatchSource:0}: Error finding container e0b22d6a0c84f6b76bacea29afafe6a564f97dcddedb6dc888a4af7459e5c4a0: Status 404 returned error can't find the container with id e0b22d6a0c84f6b76bacea29afafe6a564f97dcddedb6dc888a4af7459e5c4a0 Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.957161 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-4g7qv\" (UID: \"9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-4g7qv" Jan 06 14:11:13 crc kubenswrapper[4869]: I0106 14:11:13.963734 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-4g7qv\" (UID: \"9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-4g7qv" Jan 06 14:11:14 crc kubenswrapper[4869]: I0106 14:11:14.059700 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-jkxsj" Jan 06 14:11:14 crc kubenswrapper[4869]: I0106 14:11:14.218181 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-4g7qv" Jan 06 14:11:14 crc kubenswrapper[4869]: I0106 14:11:14.229370 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6bcb6bc44f-2zvsc"] Jan 06 14:11:14 crc kubenswrapper[4869]: W0106 14:11:14.242440 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01626b9e_ead7_4ccb_ab29_a5ffc33c045f.slice/crio-54f4782da73b166c7efdf2822f77f5b67b6cd1a015d9c611e040cd4b04113c62 WatchSource:0}: Error finding container 54f4782da73b166c7efdf2822f77f5b67b6cd1a015d9c611e040cd4b04113c62: Status 404 returned error can't find the container with id 54f4782da73b166c7efdf2822f77f5b67b6cd1a015d9c611e040cd4b04113c62 Jan 06 14:11:14 crc kubenswrapper[4869]: I0106 14:11:14.431967 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-4g7qv"] Jan 06 14:11:14 crc kubenswrapper[4869]: I0106 14:11:14.530644 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-jkxsj"] Jan 06 14:11:14 crc kubenswrapper[4869]: W0106 14:11:14.551541 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod483878fb_3dc5_49d2_8765_5f3cb8cbf8f2.slice/crio-c51489958ff8aa4e344e26c50ab3c6950d827db245767690cdb3cd8f3e876a32 WatchSource:0}: Error finding container c51489958ff8aa4e344e26c50ab3c6950d827db245767690cdb3cd8f3e876a32: Status 404 returned error can't find the container with id c51489958ff8aa4e344e26c50ab3c6950d827db245767690cdb3cd8f3e876a32 Jan 06 14:11:14 crc kubenswrapper[4869]: I0106 14:11:14.790559 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6fgd7" event={"ID":"c0737021-333c-4fb1-a387-75dcff62515c","Type":"ContainerStarted","Data":"e0b22d6a0c84f6b76bacea29afafe6a564f97dcddedb6dc888a4af7459e5c4a0"} Jan 06 14:11:14 crc kubenswrapper[4869]: I0106 14:11:14.792289 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-4g7qv" event={"ID":"9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3","Type":"ContainerStarted","Data":"76b68776f88b19a3e3aeb85507a9b4f3b0d17b1b27bc6649227e48961df33115"} Jan 06 14:11:14 crc kubenswrapper[4869]: I0106 14:11:14.794037 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bcb6bc44f-2zvsc" event={"ID":"01626b9e-ead7-4ccb-ab29-a5ffc33c045f","Type":"ContainerStarted","Data":"17cdf5d1def413c1d84bc151307ed36581876269f1b6c2b31c64e6e39f18b2d1"} Jan 06 14:11:14 crc kubenswrapper[4869]: I0106 14:11:14.794129 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6bcb6bc44f-2zvsc" event={"ID":"01626b9e-ead7-4ccb-ab29-a5ffc33c045f","Type":"ContainerStarted","Data":"54f4782da73b166c7efdf2822f77f5b67b6cd1a015d9c611e040cd4b04113c62"} Jan 06 14:11:14 crc kubenswrapper[4869]: I0106 14:11:14.796046 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-jkxsj" event={"ID":"483878fb-3dc5-49d2-8765-5f3cb8cbf8f2","Type":"ContainerStarted","Data":"c51489958ff8aa4e344e26c50ab3c6950d827db245767690cdb3cd8f3e876a32"} Jan 06 14:11:14 crc kubenswrapper[4869]: I0106 14:11:14.822917 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6bcb6bc44f-2zvsc" podStartSLOduration=1.8228963569999999 
podStartE2EDuration="1.822896357s" podCreationTimestamp="2026-01-06 14:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:11:14.819765384 +0000 UTC m=+693.359453068" watchObservedRunningTime="2026-01-06 14:11:14.822896357 +0000 UTC m=+693.362584021" Jan 06 14:11:16 crc kubenswrapper[4869]: I0106 14:11:16.819547 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8jk6n" event={"ID":"06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0","Type":"ContainerStarted","Data":"529e4d004c89685dd13cb34deaa447112c5d88ba6d68431d5ad4e20889ac5c99"} Jan 06 14:11:16 crc kubenswrapper[4869]: I0106 14:11:16.820933 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-8jk6n" Jan 06 14:11:16 crc kubenswrapper[4869]: I0106 14:11:16.822480 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6fgd7" event={"ID":"c0737021-333c-4fb1-a387-75dcff62515c","Type":"ContainerStarted","Data":"9fa85fdc895c6fb45c57f4aad4335f8ef7dd74febcd1301fbda9c98acb9d4d79"} Jan 06 14:11:16 crc kubenswrapper[4869]: I0106 14:11:16.823905 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-jkxsj" event={"ID":"483878fb-3dc5-49d2-8765-5f3cb8cbf8f2","Type":"ContainerStarted","Data":"9bba1b67a67af594669c3251e9a6d70a174aa209945efc0411854dba2bbe2474"} Jan 06 14:11:16 crc kubenswrapper[4869]: I0106 14:11:16.824326 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-jkxsj" Jan 06 14:11:16 crc kubenswrapper[4869]: I0106 14:11:16.869326 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-jkxsj" podStartSLOduration=2.075522417 podStartE2EDuration="3.869297822s" podCreationTimestamp="2026-01-06 14:11:13 +0000 UTC" firstStartedPulling="2026-01-06 14:11:14.55430651 +0000 UTC m=+693.093994174" lastFinishedPulling="2026-01-06 14:11:16.348081915 +0000 UTC m=+694.887769579" observedRunningTime="2026-01-06 14:11:16.865071565 +0000 UTC m=+695.404759259" watchObservedRunningTime="2026-01-06 14:11:16.869297822 +0000 UTC m=+695.408985496" Jan 06 14:11:16 crc kubenswrapper[4869]: I0106 14:11:16.870271 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-8jk6n" podStartSLOduration=1.084629816 podStartE2EDuration="3.870263985s" podCreationTimestamp="2026-01-06 14:11:13 +0000 UTC" firstStartedPulling="2026-01-06 14:11:13.561165877 +0000 UTC m=+692.100853541" lastFinishedPulling="2026-01-06 14:11:16.346800056 +0000 UTC m=+694.886487710" observedRunningTime="2026-01-06 14:11:16.843389858 +0000 UTC m=+695.383077542" watchObservedRunningTime="2026-01-06 14:11:16.870263985 +0000 UTC m=+695.409951649" Jan 06 14:11:17 crc kubenswrapper[4869]: I0106 14:11:17.833801 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-4g7qv" event={"ID":"9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3","Type":"ContainerStarted","Data":"f313936a16f52863a58fdc4b50fb75a4e959c15eab25228d4f989f727d896155"} Jan 06 14:11:17 crc kubenswrapper[4869]: I0106 14:11:17.857296 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-4g7qv" podStartSLOduration=1.6509825500000002 podStartE2EDuration="4.857276016s" 
podCreationTimestamp="2026-01-06 14:11:13 +0000 UTC" firstStartedPulling="2026-01-06 14:11:14.441818417 +0000 UTC m=+692.981506081" lastFinishedPulling="2026-01-06 14:11:17.648111883 +0000 UTC m=+696.187799547" observedRunningTime="2026-01-06 14:11:17.848840303 +0000 UTC m=+696.388527967" watchObservedRunningTime="2026-01-06 14:11:17.857276016 +0000 UTC m=+696.396963680" Jan 06 14:11:19 crc kubenswrapper[4869]: I0106 14:11:19.846719 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6fgd7" event={"ID":"c0737021-333c-4fb1-a387-75dcff62515c","Type":"ContainerStarted","Data":"cf4084242f770f6eae33d410c7bddf6647ecd37f802406dbad80a12795f1f3af"} Jan 06 14:11:19 crc kubenswrapper[4869]: I0106 14:11:19.875047 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6fgd7" podStartSLOduration=1.352511557 podStartE2EDuration="6.875019644s" podCreationTimestamp="2026-01-06 14:11:13 +0000 UTC" firstStartedPulling="2026-01-06 14:11:13.957398135 +0000 UTC m=+692.497085799" lastFinishedPulling="2026-01-06 14:11:19.479906212 +0000 UTC m=+698.019593886" observedRunningTime="2026-01-06 14:11:19.866240083 +0000 UTC m=+698.405927737" watchObservedRunningTime="2026-01-06 14:11:19.875019644 +0000 UTC m=+698.414707308" Jan 06 14:11:23 crc kubenswrapper[4869]: I0106 14:11:23.519653 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-8jk6n" Jan 06 14:11:23 crc kubenswrapper[4869]: I0106 14:11:23.831452 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:23 crc kubenswrapper[4869]: I0106 14:11:23.831535 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:23 crc kubenswrapper[4869]: I0106 14:11:23.837015 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:23 crc kubenswrapper[4869]: I0106 14:11:23.880688 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6bcb6bc44f-2zvsc" Jan 06 14:11:23 crc kubenswrapper[4869]: I0106 14:11:23.930071 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-b9gld"] Jan 06 14:11:33 crc kubenswrapper[4869]: I0106 14:11:33.622457 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:11:33 crc kubenswrapper[4869]: I0106 14:11:33.623544 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:11:34 crc kubenswrapper[4869]: I0106 14:11:34.069054 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-jkxsj" Jan 06 14:11:46 crc kubenswrapper[4869]: I0106 14:11:46.795095 4869 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9"] Jan 06 14:11:46 crc kubenswrapper[4869]: I0106 14:11:46.796748 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9" Jan 06 14:11:46 crc kubenswrapper[4869]: I0106 14:11:46.801909 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 06 14:11:46 crc kubenswrapper[4869]: I0106 14:11:46.812553 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9"] Jan 06 14:11:46 crc kubenswrapper[4869]: I0106 14:11:46.978834 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9\" (UID: \"aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9" Jan 06 14:11:46 crc kubenswrapper[4869]: I0106 14:11:46.979419 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9\" (UID: \"aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9" Jan 06 14:11:46 crc kubenswrapper[4869]: I0106 14:11:46.979466 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56f8k\" (UniqueName: \"kubernetes.io/projected/aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56-kube-api-access-56f8k\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9\" (UID: \"aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9" Jan 06 14:11:47 crc kubenswrapper[4869]: I0106 14:11:47.080421 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9\" (UID: \"aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9" Jan 06 14:11:47 crc kubenswrapper[4869]: I0106 14:11:47.080490 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9\" (UID: \"aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9" Jan 06 14:11:47 crc kubenswrapper[4869]: I0106 14:11:47.080532 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56f8k\" (UniqueName: \"kubernetes.io/projected/aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56-kube-api-access-56f8k\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9\" (UID: \"aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56\") " 
pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9" Jan 06 14:11:47 crc kubenswrapper[4869]: I0106 14:11:47.081046 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9\" (UID: \"aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9" Jan 06 14:11:47 crc kubenswrapper[4869]: I0106 14:11:47.081139 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9\" (UID: \"aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9" Jan 06 14:11:47 crc kubenswrapper[4869]: I0106 14:11:47.107157 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56f8k\" (UniqueName: \"kubernetes.io/projected/aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56-kube-api-access-56f8k\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9\" (UID: \"aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9" Jan 06 14:11:47 crc kubenswrapper[4869]: I0106 14:11:47.113721 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9" Jan 06 14:11:47 crc kubenswrapper[4869]: I0106 14:11:47.412525 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9"] Jan 06 14:11:48 crc kubenswrapper[4869]: I0106 14:11:48.042477 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9" event={"ID":"aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56","Type":"ContainerStarted","Data":"ef7a82a88159f2c851935083c67b4da3ba49f7249fb64e50a0e0cf5ebb09ff01"} Jan 06 14:11:48 crc kubenswrapper[4869]: I0106 14:11:48.973933 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-b9gld" podUID="959dc13f-609b-4272-abe4-e26a0f79ab8c" containerName="console" containerID="cri-o://4dd4a4e6dc24f5294b2480e75aa49722ff4f615e44628239407241c3050c5e3f" gracePeriod=15 Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.051546 4869 generic.go:334] "Generic (PLEG): container finished" podID="aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56" containerID="3a47ab8c906ae799af0c46ac9ccf0e30733231049b99f73f155f4a9dfd721628" exitCode=0 Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.051590 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9" event={"ID":"aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56","Type":"ContainerDied","Data":"3a47ab8c906ae799af0c46ac9ccf0e30733231049b99f73f155f4a9dfd721628"} Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.375938 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-b9gld_959dc13f-609b-4272-abe4-e26a0f79ab8c/console/0.log" Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.376030 4869 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.419115 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-trusted-ca-bundle\") pod \"959dc13f-609b-4272-abe4-e26a0f79ab8c\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.419187 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-oauth-serving-cert\") pod \"959dc13f-609b-4272-abe4-e26a0f79ab8c\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.419254 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/959dc13f-609b-4272-abe4-e26a0f79ab8c-console-oauth-config\") pod \"959dc13f-609b-4272-abe4-e26a0f79ab8c\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.419314 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-console-config\") pod \"959dc13f-609b-4272-abe4-e26a0f79ab8c\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.419342 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dtb6\" (UniqueName: \"kubernetes.io/projected/959dc13f-609b-4272-abe4-e26a0f79ab8c-kube-api-access-5dtb6\") pod \"959dc13f-609b-4272-abe4-e26a0f79ab8c\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.419468 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/959dc13f-609b-4272-abe4-e26a0f79ab8c-console-serving-cert\") pod \"959dc13f-609b-4272-abe4-e26a0f79ab8c\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.420345 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "959dc13f-609b-4272-abe4-e26a0f79ab8c" (UID: "959dc13f-609b-4272-abe4-e26a0f79ab8c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.420350 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-console-config" (OuterVolumeSpecName: "console-config") pod "959dc13f-609b-4272-abe4-e26a0f79ab8c" (UID: "959dc13f-609b-4272-abe4-e26a0f79ab8c"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.420824 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-service-ca\") pod \"959dc13f-609b-4272-abe4-e26a0f79ab8c\" (UID: \"959dc13f-609b-4272-abe4-e26a0f79ab8c\") " Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.420810 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "959dc13f-609b-4272-abe4-e26a0f79ab8c" (UID: "959dc13f-609b-4272-abe4-e26a0f79ab8c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.421119 4869 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.421144 4869 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.421157 4869 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-console-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.421236 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-service-ca" (OuterVolumeSpecName: "service-ca") pod "959dc13f-609b-4272-abe4-e26a0f79ab8c" (UID: "959dc13f-609b-4272-abe4-e26a0f79ab8c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.426574 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/959dc13f-609b-4272-abe4-e26a0f79ab8c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "959dc13f-609b-4272-abe4-e26a0f79ab8c" (UID: "959dc13f-609b-4272-abe4-e26a0f79ab8c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.426634 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/959dc13f-609b-4272-abe4-e26a0f79ab8c-kube-api-access-5dtb6" (OuterVolumeSpecName: "kube-api-access-5dtb6") pod "959dc13f-609b-4272-abe4-e26a0f79ab8c" (UID: "959dc13f-609b-4272-abe4-e26a0f79ab8c"). InnerVolumeSpecName "kube-api-access-5dtb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.426893 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/959dc13f-609b-4272-abe4-e26a0f79ab8c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "959dc13f-609b-4272-abe4-e26a0f79ab8c" (UID: "959dc13f-609b-4272-abe4-e26a0f79ab8c"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.522327 4869 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/959dc13f-609b-4272-abe4-e26a0f79ab8c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.522367 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dtb6\" (UniqueName: \"kubernetes.io/projected/959dc13f-609b-4272-abe4-e26a0f79ab8c-kube-api-access-5dtb6\") on node \"crc\" DevicePath \"\"" Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.522381 4869 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/959dc13f-609b-4272-abe4-e26a0f79ab8c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 06 14:11:49 crc kubenswrapper[4869]: I0106 14:11:49.522391 4869 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/959dc13f-609b-4272-abe4-e26a0f79ab8c-service-ca\") on node \"crc\" DevicePath \"\"" Jan 06 14:11:50 crc kubenswrapper[4869]: I0106 14:11:50.062027 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-b9gld_959dc13f-609b-4272-abe4-e26a0f79ab8c/console/0.log" Jan 06 14:11:50 crc kubenswrapper[4869]: I0106 14:11:50.062088 4869 generic.go:334] "Generic (PLEG): container finished" podID="959dc13f-609b-4272-abe4-e26a0f79ab8c" containerID="4dd4a4e6dc24f5294b2480e75aa49722ff4f615e44628239407241c3050c5e3f" exitCode=2 Jan 06 14:11:50 crc kubenswrapper[4869]: I0106 14:11:50.062127 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b9gld" event={"ID":"959dc13f-609b-4272-abe4-e26a0f79ab8c","Type":"ContainerDied","Data":"4dd4a4e6dc24f5294b2480e75aa49722ff4f615e44628239407241c3050c5e3f"} Jan 06 14:11:50 crc kubenswrapper[4869]: I0106 14:11:50.062165 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b9gld" event={"ID":"959dc13f-609b-4272-abe4-e26a0f79ab8c","Type":"ContainerDied","Data":"808cd3bb6b0d93109d8f9993462ccdcaf50858a64d257c007b679212edf5923f"} Jan 06 14:11:50 crc kubenswrapper[4869]: I0106 14:11:50.062190 4869 scope.go:117] "RemoveContainer" containerID="4dd4a4e6dc24f5294b2480e75aa49722ff4f615e44628239407241c3050c5e3f" Jan 06 14:11:50 crc kubenswrapper[4869]: I0106 14:11:50.062227 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-b9gld" Jan 06 14:11:50 crc kubenswrapper[4869]: I0106 14:11:50.083699 4869 scope.go:117] "RemoveContainer" containerID="4dd4a4e6dc24f5294b2480e75aa49722ff4f615e44628239407241c3050c5e3f" Jan 06 14:11:50 crc kubenswrapper[4869]: E0106 14:11:50.084573 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dd4a4e6dc24f5294b2480e75aa49722ff4f615e44628239407241c3050c5e3f\": container with ID starting with 4dd4a4e6dc24f5294b2480e75aa49722ff4f615e44628239407241c3050c5e3f not found: ID does not exist" containerID="4dd4a4e6dc24f5294b2480e75aa49722ff4f615e44628239407241c3050c5e3f" Jan 06 14:11:50 crc kubenswrapper[4869]: I0106 14:11:50.084743 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dd4a4e6dc24f5294b2480e75aa49722ff4f615e44628239407241c3050c5e3f"} err="failed to get container status \"4dd4a4e6dc24f5294b2480e75aa49722ff4f615e44628239407241c3050c5e3f\": rpc error: code = NotFound desc = could not find container \"4dd4a4e6dc24f5294b2480e75aa49722ff4f615e44628239407241c3050c5e3f\": container with ID starting with 4dd4a4e6dc24f5294b2480e75aa49722ff4f615e44628239407241c3050c5e3f not found: ID does not exist" Jan 06 14:11:50 crc kubenswrapper[4869]: I0106 14:11:50.089326 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-b9gld"] Jan 06 14:11:50 crc kubenswrapper[4869]: I0106 14:11:50.094340 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-b9gld"] Jan 06 14:11:51 crc kubenswrapper[4869]: I0106 14:11:51.077072 4869 generic.go:334] "Generic (PLEG): container finished" podID="aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56" containerID="356ff53e462c2dbda29bb8c1405fef3d7e1291768eb1a2c37a395dd441502012" exitCode=0 Jan 06 14:11:51 crc kubenswrapper[4869]: I0106 14:11:51.077161 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9" event={"ID":"aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56","Type":"ContainerDied","Data":"356ff53e462c2dbda29bb8c1405fef3d7e1291768eb1a2c37a395dd441502012"} Jan 06 14:11:51 crc kubenswrapper[4869]: I0106 14:11:51.714764 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="959dc13f-609b-4272-abe4-e26a0f79ab8c" path="/var/lib/kubelet/pods/959dc13f-609b-4272-abe4-e26a0f79ab8c/volumes" Jan 06 14:11:52 crc kubenswrapper[4869]: I0106 14:11:52.086658 4869 generic.go:334] "Generic (PLEG): container finished" podID="aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56" containerID="8adad809785c20b1961a62db530abed43a8387f371db1b419ae8f6380d4b44cd" exitCode=0 Jan 06 14:11:52 crc kubenswrapper[4869]: I0106 14:11:52.086714 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9" event={"ID":"aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56","Type":"ContainerDied","Data":"8adad809785c20b1961a62db530abed43a8387f371db1b419ae8f6380d4b44cd"} Jan 06 14:11:53 crc kubenswrapper[4869]: I0106 14:11:53.341189 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9" Jan 06 14:11:53 crc kubenswrapper[4869]: I0106 14:11:53.379196 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56-bundle\") pod \"aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56\" (UID: \"aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56\") " Jan 06 14:11:53 crc kubenswrapper[4869]: I0106 14:11:53.379297 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56-util\") pod \"aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56\" (UID: \"aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56\") " Jan 06 14:11:53 crc kubenswrapper[4869]: I0106 14:11:53.379438 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56f8k\" (UniqueName: \"kubernetes.io/projected/aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56-kube-api-access-56f8k\") pod \"aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56\" (UID: \"aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56\") " Jan 06 14:11:53 crc kubenswrapper[4869]: I0106 14:11:53.380223 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56-bundle" (OuterVolumeSpecName: "bundle") pod "aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56" (UID: "aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:11:53 crc kubenswrapper[4869]: I0106 14:11:53.381307 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:11:53 crc kubenswrapper[4869]: I0106 14:11:53.387856 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56-kube-api-access-56f8k" (OuterVolumeSpecName: "kube-api-access-56f8k") pod "aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56" (UID: "aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56"). InnerVolumeSpecName "kube-api-access-56f8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:11:53 crc kubenswrapper[4869]: I0106 14:11:53.392806 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56-util" (OuterVolumeSpecName: "util") pod "aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56" (UID: "aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:11:53 crc kubenswrapper[4869]: I0106 14:11:53.482020 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56-util\") on node \"crc\" DevicePath \"\"" Jan 06 14:11:53 crc kubenswrapper[4869]: I0106 14:11:53.482087 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56f8k\" (UniqueName: \"kubernetes.io/projected/aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56-kube-api-access-56f8k\") on node \"crc\" DevicePath \"\"" Jan 06 14:11:54 crc kubenswrapper[4869]: I0106 14:11:54.100258 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9" event={"ID":"aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56","Type":"ContainerDied","Data":"ef7a82a88159f2c851935083c67b4da3ba49f7249fb64e50a0e0cf5ebb09ff01"} Jan 06 14:11:54 crc kubenswrapper[4869]: I0106 14:11:54.100768 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef7a82a88159f2c851935083c67b4da3ba49f7249fb64e50a0e0cf5ebb09ff01" Jan 06 14:11:54 crc kubenswrapper[4869]: I0106 14:11:54.100336 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.122117 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65"] Jan 06 14:12:02 crc kubenswrapper[4869]: E0106 14:12:02.122928 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56" containerName="extract" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.122941 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56" containerName="extract" Jan 06 14:12:02 crc kubenswrapper[4869]: E0106 14:12:02.122953 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56" containerName="pull" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.122960 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56" containerName="pull" Jan 06 14:12:02 crc kubenswrapper[4869]: E0106 14:12:02.122977 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56" containerName="util" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.122984 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56" containerName="util" Jan 06 14:12:02 crc kubenswrapper[4869]: E0106 14:12:02.123012 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="959dc13f-609b-4272-abe4-e26a0f79ab8c" containerName="console" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.123018 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="959dc13f-609b-4272-abe4-e26a0f79ab8c" containerName="console" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.123151 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="959dc13f-609b-4272-abe4-e26a0f79ab8c" containerName="console" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.123163 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56" containerName="extract" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.123656 
4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.127594 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.127803 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.127806 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-gqd2q" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.129215 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.133005 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.150451 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65"] Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.300531 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgcr4\" (UniqueName: \"kubernetes.io/projected/81de01b0-a48a-4ca7-8509-9d12c5cb27da-kube-api-access-fgcr4\") pod \"metallb-operator-controller-manager-6ccb949c7b-7jw65\" (UID: \"81de01b0-a48a-4ca7-8509-9d12c5cb27da\") " pod="metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.300718 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/81de01b0-a48a-4ca7-8509-9d12c5cb27da-webhook-cert\") pod \"metallb-operator-controller-manager-6ccb949c7b-7jw65\" (UID: \"81de01b0-a48a-4ca7-8509-9d12c5cb27da\") " pod="metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.300825 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/81de01b0-a48a-4ca7-8509-9d12c5cb27da-apiservice-cert\") pod \"metallb-operator-controller-manager-6ccb949c7b-7jw65\" (UID: \"81de01b0-a48a-4ca7-8509-9d12c5cb27da\") " pod="metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.383723 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-569fbf4bc-hnc5b"] Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.384743 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-569fbf4bc-hnc5b" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.387871 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-sqc55" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.388488 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.388688 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.401857 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgcr4\" (UniqueName: \"kubernetes.io/projected/81de01b0-a48a-4ca7-8509-9d12c5cb27da-kube-api-access-fgcr4\") pod \"metallb-operator-controller-manager-6ccb949c7b-7jw65\" (UID: \"81de01b0-a48a-4ca7-8509-9d12c5cb27da\") " pod="metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.401918 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/81de01b0-a48a-4ca7-8509-9d12c5cb27da-webhook-cert\") pod \"metallb-operator-controller-manager-6ccb949c7b-7jw65\" (UID: \"81de01b0-a48a-4ca7-8509-9d12c5cb27da\") " pod="metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.401952 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/81de01b0-a48a-4ca7-8509-9d12c5cb27da-apiservice-cert\") pod \"metallb-operator-controller-manager-6ccb949c7b-7jw65\" (UID: \"81de01b0-a48a-4ca7-8509-9d12c5cb27da\") " pod="metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.411209 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/81de01b0-a48a-4ca7-8509-9d12c5cb27da-webhook-cert\") pod \"metallb-operator-controller-manager-6ccb949c7b-7jw65\" (UID: \"81de01b0-a48a-4ca7-8509-9d12c5cb27da\") " pod="metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.413137 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-569fbf4bc-hnc5b"] Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.416273 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/81de01b0-a48a-4ca7-8509-9d12c5cb27da-apiservice-cert\") pod \"metallb-operator-controller-manager-6ccb949c7b-7jw65\" (UID: \"81de01b0-a48a-4ca7-8509-9d12c5cb27da\") " pod="metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.432935 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgcr4\" (UniqueName: \"kubernetes.io/projected/81de01b0-a48a-4ca7-8509-9d12c5cb27da-kube-api-access-fgcr4\") pod \"metallb-operator-controller-manager-6ccb949c7b-7jw65\" (UID: \"81de01b0-a48a-4ca7-8509-9d12c5cb27da\") " pod="metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.440630 4869 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.502769 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndpcw\" (UniqueName: \"kubernetes.io/projected/11640b38-620f-4dd4-b9b8-68c84cef4a48-kube-api-access-ndpcw\") pod \"metallb-operator-webhook-server-569fbf4bc-hnc5b\" (UID: \"11640b38-620f-4dd4-b9b8-68c84cef4a48\") " pod="metallb-system/metallb-operator-webhook-server-569fbf4bc-hnc5b" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.503173 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/11640b38-620f-4dd4-b9b8-68c84cef4a48-webhook-cert\") pod \"metallb-operator-webhook-server-569fbf4bc-hnc5b\" (UID: \"11640b38-620f-4dd4-b9b8-68c84cef4a48\") " pod="metallb-system/metallb-operator-webhook-server-569fbf4bc-hnc5b" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.503205 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/11640b38-620f-4dd4-b9b8-68c84cef4a48-apiservice-cert\") pod \"metallb-operator-webhook-server-569fbf4bc-hnc5b\" (UID: \"11640b38-620f-4dd4-b9b8-68c84cef4a48\") " pod="metallb-system/metallb-operator-webhook-server-569fbf4bc-hnc5b" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.604273 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/11640b38-620f-4dd4-b9b8-68c84cef4a48-webhook-cert\") pod \"metallb-operator-webhook-server-569fbf4bc-hnc5b\" (UID: \"11640b38-620f-4dd4-b9b8-68c84cef4a48\") " pod="metallb-system/metallb-operator-webhook-server-569fbf4bc-hnc5b" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.604344 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/11640b38-620f-4dd4-b9b8-68c84cef4a48-apiservice-cert\") pod \"metallb-operator-webhook-server-569fbf4bc-hnc5b\" (UID: \"11640b38-620f-4dd4-b9b8-68c84cef4a48\") " pod="metallb-system/metallb-operator-webhook-server-569fbf4bc-hnc5b" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.604459 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndpcw\" (UniqueName: \"kubernetes.io/projected/11640b38-620f-4dd4-b9b8-68c84cef4a48-kube-api-access-ndpcw\") pod \"metallb-operator-webhook-server-569fbf4bc-hnc5b\" (UID: \"11640b38-620f-4dd4-b9b8-68c84cef4a48\") " pod="metallb-system/metallb-operator-webhook-server-569fbf4bc-hnc5b" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.609067 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/11640b38-620f-4dd4-b9b8-68c84cef4a48-apiservice-cert\") pod \"metallb-operator-webhook-server-569fbf4bc-hnc5b\" (UID: \"11640b38-620f-4dd4-b9b8-68c84cef4a48\") " pod="metallb-system/metallb-operator-webhook-server-569fbf4bc-hnc5b" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.612242 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/11640b38-620f-4dd4-b9b8-68c84cef4a48-webhook-cert\") pod \"metallb-operator-webhook-server-569fbf4bc-hnc5b\" (UID: \"11640b38-620f-4dd4-b9b8-68c84cef4a48\") " 
pod="metallb-system/metallb-operator-webhook-server-569fbf4bc-hnc5b" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.630006 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndpcw\" (UniqueName: \"kubernetes.io/projected/11640b38-620f-4dd4-b9b8-68c84cef4a48-kube-api-access-ndpcw\") pod \"metallb-operator-webhook-server-569fbf4bc-hnc5b\" (UID: \"11640b38-620f-4dd4-b9b8-68c84cef4a48\") " pod="metallb-system/metallb-operator-webhook-server-569fbf4bc-hnc5b" Jan 06 14:12:02 crc kubenswrapper[4869]: I0106 14:12:02.700143 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-569fbf4bc-hnc5b" Jan 06 14:12:03 crc kubenswrapper[4869]: I0106 14:12:03.061708 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65"] Jan 06 14:12:03 crc kubenswrapper[4869]: W0106 14:12:03.069401 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81de01b0_a48a_4ca7_8509_9d12c5cb27da.slice/crio-0f168ef1cdabff0aeb01ecea15ec06e463d42d3f84bcf1e251c9259071781d81 WatchSource:0}: Error finding container 0f168ef1cdabff0aeb01ecea15ec06e463d42d3f84bcf1e251c9259071781d81: Status 404 returned error can't find the container with id 0f168ef1cdabff0aeb01ecea15ec06e463d42d3f84bcf1e251c9259071781d81 Jan 06 14:12:03 crc kubenswrapper[4869]: I0106 14:12:03.158756 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65" event={"ID":"81de01b0-a48a-4ca7-8509-9d12c5cb27da","Type":"ContainerStarted","Data":"0f168ef1cdabff0aeb01ecea15ec06e463d42d3f84bcf1e251c9259071781d81"} Jan 06 14:12:03 crc kubenswrapper[4869]: I0106 14:12:03.338943 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-569fbf4bc-hnc5b"] Jan 06 14:12:03 crc kubenswrapper[4869]: W0106 14:12:03.340349 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11640b38_620f_4dd4_b9b8_68c84cef4a48.slice/crio-28c320c07e649509be4993d98108d32247a1d2600b89949ac69a33c5134fd07b WatchSource:0}: Error finding container 28c320c07e649509be4993d98108d32247a1d2600b89949ac69a33c5134fd07b: Status 404 returned error can't find the container with id 28c320c07e649509be4993d98108d32247a1d2600b89949ac69a33c5134fd07b Jan 06 14:12:03 crc kubenswrapper[4869]: I0106 14:12:03.622073 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:12:03 crc kubenswrapper[4869]: I0106 14:12:03.622134 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:12:04 crc kubenswrapper[4869]: I0106 14:12:04.193598 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-569fbf4bc-hnc5b" 
event={"ID":"11640b38-620f-4dd4-b9b8-68c84cef4a48","Type":"ContainerStarted","Data":"28c320c07e649509be4993d98108d32247a1d2600b89949ac69a33c5134fd07b"} Jan 06 14:12:07 crc kubenswrapper[4869]: I0106 14:12:07.222404 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65" event={"ID":"81de01b0-a48a-4ca7-8509-9d12c5cb27da","Type":"ContainerStarted","Data":"159c2e71bc756c346e605646e62c9144596d3721107de6825b074fec2cb80137"} Jan 06 14:12:07 crc kubenswrapper[4869]: I0106 14:12:07.223545 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65" Jan 06 14:12:07 crc kubenswrapper[4869]: I0106 14:12:07.252444 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65" podStartSLOduration=1.653317322 podStartE2EDuration="5.252425209s" podCreationTimestamp="2026-01-06 14:12:02 +0000 UTC" firstStartedPulling="2026-01-06 14:12:03.072304865 +0000 UTC m=+741.611992529" lastFinishedPulling="2026-01-06 14:12:06.671412752 +0000 UTC m=+745.211100416" observedRunningTime="2026-01-06 14:12:07.249021424 +0000 UTC m=+745.788709108" watchObservedRunningTime="2026-01-06 14:12:07.252425209 +0000 UTC m=+745.792112873" Jan 06 14:12:09 crc kubenswrapper[4869]: I0106 14:12:09.236452 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-569fbf4bc-hnc5b" event={"ID":"11640b38-620f-4dd4-b9b8-68c84cef4a48","Type":"ContainerStarted","Data":"c4b46c3f4b50062b0ed7da4ccd37ecdb36907dfe324c86bf41202768393bce1f"} Jan 06 14:12:09 crc kubenswrapper[4869]: I0106 14:12:09.237801 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-569fbf4bc-hnc5b" Jan 06 14:12:09 crc kubenswrapper[4869]: I0106 14:12:09.266964 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-569fbf4bc-hnc5b" podStartSLOduration=1.8305926719999999 podStartE2EDuration="7.266944819s" podCreationTimestamp="2026-01-06 14:12:02 +0000 UTC" firstStartedPulling="2026-01-06 14:12:03.343457585 +0000 UTC m=+741.883145249" lastFinishedPulling="2026-01-06 14:12:08.779809732 +0000 UTC m=+747.319497396" observedRunningTime="2026-01-06 14:12:09.257814671 +0000 UTC m=+747.797502335" watchObservedRunningTime="2026-01-06 14:12:09.266944819 +0000 UTC m=+747.806632483" Jan 06 14:12:22 crc kubenswrapper[4869]: I0106 14:12:22.705311 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-569fbf4bc-hnc5b" Jan 06 14:12:25 crc kubenswrapper[4869]: I0106 14:12:25.853947 4869 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 06 14:12:33 crc kubenswrapper[4869]: I0106 14:12:33.623002 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:12:33 crc kubenswrapper[4869]: I0106 14:12:33.623631 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:12:33 crc kubenswrapper[4869]: I0106 14:12:33.623718 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:12:33 crc kubenswrapper[4869]: I0106 14:12:33.624336 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"27602a36611783728a2b020431c5bc3185474cb58d70bd206f2784227d107aee"} pod="openshift-machine-config-operator/machine-config-daemon-kt9df" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 06 14:12:33 crc kubenswrapper[4869]: I0106 14:12:33.624396 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" containerID="cri-o://27602a36611783728a2b020431c5bc3185474cb58d70bd206f2784227d107aee" gracePeriod=600 Jan 06 14:12:34 crc kubenswrapper[4869]: I0106 14:12:34.392842 4869 generic.go:334] "Generic (PLEG): container finished" podID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerID="27602a36611783728a2b020431c5bc3185474cb58d70bd206f2784227d107aee" exitCode=0 Jan 06 14:12:34 crc kubenswrapper[4869]: I0106 14:12:34.392937 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerDied","Data":"27602a36611783728a2b020431c5bc3185474cb58d70bd206f2784227d107aee"} Jan 06 14:12:34 crc kubenswrapper[4869]: I0106 14:12:34.393552 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerStarted","Data":"00b21de14b885131a6ee84f5e807e1d7b8525758bcccc0f6c7a638d52ae501ed"} Jan 06 14:12:34 crc kubenswrapper[4869]: I0106 14:12:34.393639 4869 scope.go:117] "RemoveContainer" containerID="30832d18b90f5a6f313dd7444f9ee97e789e86d3f14416ed00953fef6c868254" Jan 06 14:12:42 crc kubenswrapper[4869]: I0106 14:12:42.444484 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.210060 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-dzkl6"] Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.210893 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dzkl6" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.213979 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.214250 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-z6whb" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.222272 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-svd9m"] Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.225533 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.228429 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.230925 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.232800 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-dzkl6"] Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.241795 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a203c019-10b7-4654-9c6f-7e8f535f4a31-frr-startup\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.241858 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a203c019-10b7-4654-9c6f-7e8f535f4a31-reloader\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.241899 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6lqb\" (UniqueName: \"kubernetes.io/projected/a203c019-10b7-4654-9c6f-7e8f535f4a31-kube-api-access-q6lqb\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.241941 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a203c019-10b7-4654-9c6f-7e8f535f4a31-metrics-certs\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.241966 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea7e6385-475b-4452-bdd5-f83763ba1484-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-dzkl6\" (UID: \"ea7e6385-475b-4452-bdd5-f83763ba1484\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dzkl6" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.242001 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a203c019-10b7-4654-9c6f-7e8f535f4a31-frr-conf\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.242035 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlhrm\" (UniqueName: \"kubernetes.io/projected/ea7e6385-475b-4452-bdd5-f83763ba1484-kube-api-access-zlhrm\") pod \"frr-k8s-webhook-server-7784b6fcf-dzkl6\" (UID: \"ea7e6385-475b-4452-bdd5-f83763ba1484\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dzkl6" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.242069 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/a203c019-10b7-4654-9c6f-7e8f535f4a31-metrics\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.242115 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a203c019-10b7-4654-9c6f-7e8f535f4a31-frr-sockets\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.301957 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-v6b2h"] Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.303483 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-v6b2h" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.305355 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.305898 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-bww28" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.308095 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.308153 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.320533 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-2jc5d"] Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.321861 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-2jc5d" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.328793 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.335795 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-2jc5d"] Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.343393 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce1c6386-c701-4846-8a6c-e04c4057862e-cert\") pod \"controller-5bddd4b946-2jc5d\" (UID: \"ce1c6386-c701-4846-8a6c-e04c4057862e\") " pod="metallb-system/controller-5bddd4b946-2jc5d" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.343446 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a203c019-10b7-4654-9c6f-7e8f535f4a31-metrics-certs\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.343469 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea7e6385-475b-4452-bdd5-f83763ba1484-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-dzkl6\" (UID: \"ea7e6385-475b-4452-bdd5-f83763ba1484\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dzkl6" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.343535 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a203c019-10b7-4654-9c6f-7e8f535f4a31-frr-conf\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.343556 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcc6k\" (UniqueName: \"kubernetes.io/projected/10efadba-cbe5-447f-8a14-768c3dbabe59-kube-api-access-dcc6k\") pod \"speaker-v6b2h\" (UID: \"10efadba-cbe5-447f-8a14-768c3dbabe59\") " pod="metallb-system/speaker-v6b2h" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.343581 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10efadba-cbe5-447f-8a14-768c3dbabe59-metrics-certs\") pod \"speaker-v6b2h\" (UID: \"10efadba-cbe5-447f-8a14-768c3dbabe59\") " pod="metallb-system/speaker-v6b2h" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.343608 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlhrm\" (UniqueName: \"kubernetes.io/projected/ea7e6385-475b-4452-bdd5-f83763ba1484-kube-api-access-zlhrm\") pod \"frr-k8s-webhook-server-7784b6fcf-dzkl6\" (UID: \"ea7e6385-475b-4452-bdd5-f83763ba1484\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dzkl6" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.343652 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a203c019-10b7-4654-9c6f-7e8f535f4a31-metrics\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.343692 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/10efadba-cbe5-447f-8a14-768c3dbabe59-memberlist\") pod \"speaker-v6b2h\" (UID: \"10efadba-cbe5-447f-8a14-768c3dbabe59\") " pod="metallb-system/speaker-v6b2h" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.343720 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/10efadba-cbe5-447f-8a14-768c3dbabe59-metallb-excludel2\") pod \"speaker-v6b2h\" (UID: \"10efadba-cbe5-447f-8a14-768c3dbabe59\") " pod="metallb-system/speaker-v6b2h" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.343792 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a203c019-10b7-4654-9c6f-7e8f535f4a31-frr-sockets\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.343817 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce1c6386-c701-4846-8a6c-e04c4057862e-metrics-certs\") pod \"controller-5bddd4b946-2jc5d\" (UID: \"ce1c6386-c701-4846-8a6c-e04c4057862e\") " pod="metallb-system/controller-5bddd4b946-2jc5d" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.343852 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a203c019-10b7-4654-9c6f-7e8f535f4a31-frr-startup\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.343881 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a203c019-10b7-4654-9c6f-7e8f535f4a31-reloader\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.343909 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnc8w\" (UniqueName: \"kubernetes.io/projected/ce1c6386-c701-4846-8a6c-e04c4057862e-kube-api-access-dnc8w\") pod \"controller-5bddd4b946-2jc5d\" (UID: \"ce1c6386-c701-4846-8a6c-e04c4057862e\") " pod="metallb-system/controller-5bddd4b946-2jc5d" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.343935 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6lqb\" (UniqueName: \"kubernetes.io/projected/a203c019-10b7-4654-9c6f-7e8f535f4a31-kube-api-access-q6lqb\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.344463 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a203c019-10b7-4654-9c6f-7e8f535f4a31-frr-conf\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: E0106 14:12:43.344636 4869 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 06 14:12:43 crc kubenswrapper[4869]: E0106 14:12:43.344739 4869 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a203c019-10b7-4654-9c6f-7e8f535f4a31-metrics-certs podName:a203c019-10b7-4654-9c6f-7e8f535f4a31 nodeName:}" failed. No retries permitted until 2026-01-06 14:12:43.844713257 +0000 UTC m=+782.384400921 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a203c019-10b7-4654-9c6f-7e8f535f4a31-metrics-certs") pod "frr-k8s-svd9m" (UID: "a203c019-10b7-4654-9c6f-7e8f535f4a31") : secret "frr-k8s-certs-secret" not found Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.344833 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a203c019-10b7-4654-9c6f-7e8f535f4a31-reloader\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.344880 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a203c019-10b7-4654-9c6f-7e8f535f4a31-frr-sockets\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.345130 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a203c019-10b7-4654-9c6f-7e8f535f4a31-metrics\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.345623 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a203c019-10b7-4654-9c6f-7e8f535f4a31-frr-startup\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.369386 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlhrm\" (UniqueName: \"kubernetes.io/projected/ea7e6385-475b-4452-bdd5-f83763ba1484-kube-api-access-zlhrm\") pod \"frr-k8s-webhook-server-7784b6fcf-dzkl6\" (UID: \"ea7e6385-475b-4452-bdd5-f83763ba1484\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dzkl6" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.370177 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ea7e6385-475b-4452-bdd5-f83763ba1484-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-dzkl6\" (UID: \"ea7e6385-475b-4452-bdd5-f83763ba1484\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dzkl6" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.373505 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6lqb\" (UniqueName: \"kubernetes.io/projected/a203c019-10b7-4654-9c6f-7e8f535f4a31-kube-api-access-q6lqb\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.445477 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce1c6386-c701-4846-8a6c-e04c4057862e-cert\") pod \"controller-5bddd4b946-2jc5d\" (UID: \"ce1c6386-c701-4846-8a6c-e04c4057862e\") " pod="metallb-system/controller-5bddd4b946-2jc5d" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.445560 
4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcc6k\" (UniqueName: \"kubernetes.io/projected/10efadba-cbe5-447f-8a14-768c3dbabe59-kube-api-access-dcc6k\") pod \"speaker-v6b2h\" (UID: \"10efadba-cbe5-447f-8a14-768c3dbabe59\") " pod="metallb-system/speaker-v6b2h" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.445593 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10efadba-cbe5-447f-8a14-768c3dbabe59-metrics-certs\") pod \"speaker-v6b2h\" (UID: \"10efadba-cbe5-447f-8a14-768c3dbabe59\") " pod="metallb-system/speaker-v6b2h" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.445628 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/10efadba-cbe5-447f-8a14-768c3dbabe59-memberlist\") pod \"speaker-v6b2h\" (UID: \"10efadba-cbe5-447f-8a14-768c3dbabe59\") " pod="metallb-system/speaker-v6b2h" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.445656 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/10efadba-cbe5-447f-8a14-768c3dbabe59-metallb-excludel2\") pod \"speaker-v6b2h\" (UID: \"10efadba-cbe5-447f-8a14-768c3dbabe59\") " pod="metallb-system/speaker-v6b2h" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.445717 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce1c6386-c701-4846-8a6c-e04c4057862e-metrics-certs\") pod \"controller-5bddd4b946-2jc5d\" (UID: \"ce1c6386-c701-4846-8a6c-e04c4057862e\") " pod="metallb-system/controller-5bddd4b946-2jc5d" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.445774 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnc8w\" (UniqueName: \"kubernetes.io/projected/ce1c6386-c701-4846-8a6c-e04c4057862e-kube-api-access-dnc8w\") pod \"controller-5bddd4b946-2jc5d\" (UID: \"ce1c6386-c701-4846-8a6c-e04c4057862e\") " pod="metallb-system/controller-5bddd4b946-2jc5d" Jan 06 14:12:43 crc kubenswrapper[4869]: E0106 14:12:43.446034 4869 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 06 14:12:43 crc kubenswrapper[4869]: E0106 14:12:43.446096 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10efadba-cbe5-447f-8a14-768c3dbabe59-memberlist podName:10efadba-cbe5-447f-8a14-768c3dbabe59 nodeName:}" failed. No retries permitted until 2026-01-06 14:12:43.946077515 +0000 UTC m=+782.485765179 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/10efadba-cbe5-447f-8a14-768c3dbabe59-memberlist") pod "speaker-v6b2h" (UID: "10efadba-cbe5-447f-8a14-768c3dbabe59") : secret "metallb-memberlist" not found Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.446979 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/10efadba-cbe5-447f-8a14-768c3dbabe59-metallb-excludel2\") pod \"speaker-v6b2h\" (UID: \"10efadba-cbe5-447f-8a14-768c3dbabe59\") " pod="metallb-system/speaker-v6b2h" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.449338 4869 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.450952 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10efadba-cbe5-447f-8a14-768c3dbabe59-metrics-certs\") pod \"speaker-v6b2h\" (UID: \"10efadba-cbe5-447f-8a14-768c3dbabe59\") " pod="metallb-system/speaker-v6b2h" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.451252 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce1c6386-c701-4846-8a6c-e04c4057862e-metrics-certs\") pod \"controller-5bddd4b946-2jc5d\" (UID: \"ce1c6386-c701-4846-8a6c-e04c4057862e\") " pod="metallb-system/controller-5bddd4b946-2jc5d" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.460380 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce1c6386-c701-4846-8a6c-e04c4057862e-cert\") pod \"controller-5bddd4b946-2jc5d\" (UID: \"ce1c6386-c701-4846-8a6c-e04c4057862e\") " pod="metallb-system/controller-5bddd4b946-2jc5d" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.463721 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcc6k\" (UniqueName: \"kubernetes.io/projected/10efadba-cbe5-447f-8a14-768c3dbabe59-kube-api-access-dcc6k\") pod \"speaker-v6b2h\" (UID: \"10efadba-cbe5-447f-8a14-768c3dbabe59\") " pod="metallb-system/speaker-v6b2h" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.463769 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnc8w\" (UniqueName: \"kubernetes.io/projected/ce1c6386-c701-4846-8a6c-e04c4057862e-kube-api-access-dnc8w\") pod \"controller-5bddd4b946-2jc5d\" (UID: \"ce1c6386-c701-4846-8a6c-e04c4057862e\") " pod="metallb-system/controller-5bddd4b946-2jc5d" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.532431 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dzkl6" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.634474 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-2jc5d" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.775707 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-dzkl6"] Jan 06 14:12:43 crc kubenswrapper[4869]: W0106 14:12:43.784562 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea7e6385_475b_4452_bdd5_f83763ba1484.slice/crio-7c1ac9a3590b814244ed295616f79dea6996d4c3a58db15af45da1b02a0ee2a0 WatchSource:0}: Error finding container 7c1ac9a3590b814244ed295616f79dea6996d4c3a58db15af45da1b02a0ee2a0: Status 404 returned error can't find the container with id 7c1ac9a3590b814244ed295616f79dea6996d4c3a58db15af45da1b02a0ee2a0 Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.850957 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a203c019-10b7-4654-9c6f-7e8f535f4a31-metrics-certs\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.855763 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a203c019-10b7-4654-9c6f-7e8f535f4a31-metrics-certs\") pod \"frr-k8s-svd9m\" (UID: \"a203c019-10b7-4654-9c6f-7e8f535f4a31\") " pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.873930 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-2jc5d"] Jan 06 14:12:43 crc kubenswrapper[4869]: W0106 14:12:43.882109 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce1c6386_c701_4846_8a6c_e04c4057862e.slice/crio-c02206ac3430a75241dbf5845353acf1c3f7a962a95161c5d24ce1a5f2cf9595 WatchSource:0}: Error finding container c02206ac3430a75241dbf5845353acf1c3f7a962a95161c5d24ce1a5f2cf9595: Status 404 returned error can't find the container with id c02206ac3430a75241dbf5845353acf1c3f7a962a95161c5d24ce1a5f2cf9595 Jan 06 14:12:43 crc kubenswrapper[4869]: I0106 14:12:43.956177 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/10efadba-cbe5-447f-8a14-768c3dbabe59-memberlist\") pod \"speaker-v6b2h\" (UID: \"10efadba-cbe5-447f-8a14-768c3dbabe59\") " pod="metallb-system/speaker-v6b2h" Jan 06 14:12:43 crc kubenswrapper[4869]: E0106 14:12:43.956397 4869 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 06 14:12:43 crc kubenswrapper[4869]: E0106 14:12:43.956492 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10efadba-cbe5-447f-8a14-768c3dbabe59-memberlist podName:10efadba-cbe5-447f-8a14-768c3dbabe59 nodeName:}" failed. No retries permitted until 2026-01-06 14:12:44.956470594 +0000 UTC m=+783.496158258 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/10efadba-cbe5-447f-8a14-768c3dbabe59-memberlist") pod "speaker-v6b2h" (UID: "10efadba-cbe5-447f-8a14-768c3dbabe59") : secret "metallb-memberlist" not found Jan 06 14:12:44 crc kubenswrapper[4869]: I0106 14:12:44.137584 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dzkl6" event={"ID":"ea7e6385-475b-4452-bdd5-f83763ba1484","Type":"ContainerStarted","Data":"7c1ac9a3590b814244ed295616f79dea6996d4c3a58db15af45da1b02a0ee2a0"} Jan 06 14:12:44 crc kubenswrapper[4869]: I0106 14:12:44.140846 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-2jc5d" event={"ID":"ce1c6386-c701-4846-8a6c-e04c4057862e","Type":"ContainerStarted","Data":"1deb75f7bdffbd41d04453eff26060d897560d59ecc9c68ffcf9832f442a2771"} Jan 06 14:12:44 crc kubenswrapper[4869]: I0106 14:12:44.140928 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-2jc5d" event={"ID":"ce1c6386-c701-4846-8a6c-e04c4057862e","Type":"ContainerStarted","Data":"c02206ac3430a75241dbf5845353acf1c3f7a962a95161c5d24ce1a5f2cf9595"} Jan 06 14:12:44 crc kubenswrapper[4869]: I0106 14:12:44.146894 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:44 crc kubenswrapper[4869]: I0106 14:12:44.972073 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/10efadba-cbe5-447f-8a14-768c3dbabe59-memberlist\") pod \"speaker-v6b2h\" (UID: \"10efadba-cbe5-447f-8a14-768c3dbabe59\") " pod="metallb-system/speaker-v6b2h" Jan 06 14:12:44 crc kubenswrapper[4869]: I0106 14:12:44.978168 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/10efadba-cbe5-447f-8a14-768c3dbabe59-memberlist\") pod \"speaker-v6b2h\" (UID: \"10efadba-cbe5-447f-8a14-768c3dbabe59\") " pod="metallb-system/speaker-v6b2h" Jan 06 14:12:45 crc kubenswrapper[4869]: I0106 14:12:45.117288 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-v6b2h" Jan 06 14:12:45 crc kubenswrapper[4869]: I0106 14:12:45.149249 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-svd9m" event={"ID":"a203c019-10b7-4654-9c6f-7e8f535f4a31","Type":"ContainerStarted","Data":"9f35605007d9628c418fc5bc240cbdaac842c8924921a57f437eb4508d7411e6"} Jan 06 14:12:45 crc kubenswrapper[4869]: I0106 14:12:45.164748 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-2jc5d" event={"ID":"ce1c6386-c701-4846-8a6c-e04c4057862e","Type":"ContainerStarted","Data":"8a1d5171ec1f515db37308660cb23afede525a9180d3a423f1123ea8e37af062"} Jan 06 14:12:45 crc kubenswrapper[4869]: I0106 14:12:45.164899 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-2jc5d" Jan 06 14:12:45 crc kubenswrapper[4869]: I0106 14:12:45.206510 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-2jc5d" podStartSLOduration=2.206488623 podStartE2EDuration="2.206488623s" podCreationTimestamp="2026-01-06 14:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:12:45.201002816 +0000 UTC m=+783.740690480" watchObservedRunningTime="2026-01-06 14:12:45.206488623 +0000 UTC m=+783.746176287" Jan 06 14:12:46 crc kubenswrapper[4869]: I0106 14:12:46.175511 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v6b2h" event={"ID":"10efadba-cbe5-447f-8a14-768c3dbabe59","Type":"ContainerStarted","Data":"4d38dfaf8bfa3d129d5331f2ec6cecb20a99fb8e13ca991d0d5015d36c8bc4f8"} Jan 06 14:12:46 crc kubenswrapper[4869]: I0106 14:12:46.175967 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v6b2h" event={"ID":"10efadba-cbe5-447f-8a14-768c3dbabe59","Type":"ContainerStarted","Data":"ec52f68a8d151ffbd8b69c2370c55514787d0cb74b146a827771ea05f70f4052"} Jan 06 14:12:46 crc kubenswrapper[4869]: I0106 14:12:46.175989 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v6b2h" event={"ID":"10efadba-cbe5-447f-8a14-768c3dbabe59","Type":"ContainerStarted","Data":"db07e9f8e7bf28a11b90c6316eb3e888ce5f6a44450748d3c101375df9c70abe"} Jan 06 14:12:46 crc kubenswrapper[4869]: I0106 14:12:46.176196 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-v6b2h" Jan 06 14:12:46 crc kubenswrapper[4869]: I0106 14:12:46.198264 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-v6b2h" podStartSLOduration=3.198244255 podStartE2EDuration="3.198244255s" podCreationTimestamp="2026-01-06 14:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:12:46.196742237 +0000 UTC m=+784.736429931" watchObservedRunningTime="2026-01-06 14:12:46.198244255 +0000 UTC m=+784.737931919" Jan 06 14:12:52 crc kubenswrapper[4869]: I0106 14:12:52.237050 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dzkl6" event={"ID":"ea7e6385-475b-4452-bdd5-f83763ba1484","Type":"ContainerStarted","Data":"881b41da73d5097cfce4da43dbbbbe651f1712c243ed39351dead0d78a6409e7"} Jan 06 14:12:52 crc kubenswrapper[4869]: I0106 14:12:52.238037 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dzkl6" Jan 06 14:12:52 crc kubenswrapper[4869]: I0106 14:12:52.242880 4869 generic.go:334] "Generic (PLEG): container finished" podID="a203c019-10b7-4654-9c6f-7e8f535f4a31" containerID="f528f89f0d0490009fcde349c14e4b7c409ec04d9a6b2a1c3077c72265e0cbe2" exitCode=0 Jan 06 14:12:52 crc kubenswrapper[4869]: I0106 14:12:52.242919 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-svd9m" event={"ID":"a203c019-10b7-4654-9c6f-7e8f535f4a31","Type":"ContainerDied","Data":"f528f89f0d0490009fcde349c14e4b7c409ec04d9a6b2a1c3077c72265e0cbe2"} Jan 06 14:12:52 crc kubenswrapper[4869]: I0106 14:12:52.264385 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dzkl6" podStartSLOduration=1.146612936 podStartE2EDuration="9.264355052s" podCreationTimestamp="2026-01-06 14:12:43 +0000 UTC" firstStartedPulling="2026-01-06 14:12:43.788900598 +0000 UTC m=+782.328588262" lastFinishedPulling="2026-01-06 14:12:51.906642704 +0000 UTC m=+790.446330378" observedRunningTime="2026-01-06 14:12:52.255736845 +0000 UTC m=+790.795424599" watchObservedRunningTime="2026-01-06 14:12:52.264355052 +0000 UTC m=+790.804042756" Jan 06 14:12:53 crc kubenswrapper[4869]: I0106 14:12:53.253181 4869 generic.go:334] "Generic (PLEG): container finished" podID="a203c019-10b7-4654-9c6f-7e8f535f4a31" containerID="b7ab079c153af3163e0d5b6a9ec081aef6070adb2b4700bb1584367f64aa12af" exitCode=0 Jan 06 14:12:53 crc kubenswrapper[4869]: I0106 14:12:53.253283 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-svd9m" event={"ID":"a203c019-10b7-4654-9c6f-7e8f535f4a31","Type":"ContainerDied","Data":"b7ab079c153af3163e0d5b6a9ec081aef6070adb2b4700bb1584367f64aa12af"} Jan 06 14:12:54 crc kubenswrapper[4869]: I0106 14:12:54.266817 4869 generic.go:334] "Generic (PLEG): container finished" podID="a203c019-10b7-4654-9c6f-7e8f535f4a31" containerID="ca921da6944b91b9e71b91c137bfd53406fa7201cb1e83fca40ebe45335e257b" exitCode=0 Jan 06 14:12:54 crc kubenswrapper[4869]: I0106 14:12:54.266885 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-svd9m" event={"ID":"a203c019-10b7-4654-9c6f-7e8f535f4a31","Type":"ContainerDied","Data":"ca921da6944b91b9e71b91c137bfd53406fa7201cb1e83fca40ebe45335e257b"} Jan 06 14:12:55 crc kubenswrapper[4869]: I0106 14:12:55.122634 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-v6b2h" Jan 06 14:12:55 crc kubenswrapper[4869]: I0106 14:12:55.280259 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-svd9m" event={"ID":"a203c019-10b7-4654-9c6f-7e8f535f4a31","Type":"ContainerStarted","Data":"9503de2ccd920219576a48d9274e188727294383afb3a895c77f29aa90c9eb91"} Jan 06 14:12:55 crc kubenswrapper[4869]: I0106 14:12:55.280313 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-svd9m" event={"ID":"a203c019-10b7-4654-9c6f-7e8f535f4a31","Type":"ContainerStarted","Data":"ef8c2d675c9ef04581de3fa44becaa2b33e248cb72c7ca8e34624382562fea24"} Jan 06 14:12:55 crc kubenswrapper[4869]: I0106 14:12:55.280322 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-svd9m" event={"ID":"a203c019-10b7-4654-9c6f-7e8f535f4a31","Type":"ContainerStarted","Data":"55a7431b98c3bb8c9dd31ad172501e4f4232b15729047fe429c4a7a6aef0f355"} Jan 06 14:12:55 crc kubenswrapper[4869]: I0106 14:12:55.280330 4869 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-svd9m" event={"ID":"a203c019-10b7-4654-9c6f-7e8f535f4a31","Type":"ContainerStarted","Data":"735efc9b86d2bdf3be5c78d60bc9254efb49ef716431b5093f8cda5fe4e5ce5c"} Jan 06 14:12:55 crc kubenswrapper[4869]: I0106 14:12:55.280341 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-svd9m" event={"ID":"a203c019-10b7-4654-9c6f-7e8f535f4a31","Type":"ContainerStarted","Data":"735586aee68f15da2b32de588c8b19fc64450451caf4208a5213451065109ea8"} Jan 06 14:12:56 crc kubenswrapper[4869]: I0106 14:12:56.292516 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-svd9m" event={"ID":"a203c019-10b7-4654-9c6f-7e8f535f4a31","Type":"ContainerStarted","Data":"3364a74efda565a82a8de6c4087ea36880c166599f45a2d0f8bd0a8e1777238f"} Jan 06 14:12:56 crc kubenswrapper[4869]: I0106 14:12:56.293166 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:56 crc kubenswrapper[4869]: I0106 14:12:56.321526 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-svd9m" podStartSLOduration=5.662272959 podStartE2EDuration="13.321507856s" podCreationTimestamp="2026-01-06 14:12:43 +0000 UTC" firstStartedPulling="2026-01-06 14:12:44.264758163 +0000 UTC m=+782.804445827" lastFinishedPulling="2026-01-06 14:12:51.92399306 +0000 UTC m=+790.463680724" observedRunningTime="2026-01-06 14:12:56.318127772 +0000 UTC m=+794.857815446" watchObservedRunningTime="2026-01-06 14:12:56.321507856 +0000 UTC m=+794.861195510" Jan 06 14:12:58 crc kubenswrapper[4869]: I0106 14:12:58.123994 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-46qfr"] Jan 06 14:12:58 crc kubenswrapper[4869]: I0106 14:12:58.125547 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-46qfr" Jan 06 14:12:58 crc kubenswrapper[4869]: I0106 14:12:58.127894 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-9dzws" Jan 06 14:12:58 crc kubenswrapper[4869]: I0106 14:12:58.135286 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-46qfr"] Jan 06 14:12:58 crc kubenswrapper[4869]: I0106 14:12:58.136125 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 06 14:12:58 crc kubenswrapper[4869]: I0106 14:12:58.136824 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 06 14:12:58 crc kubenswrapper[4869]: I0106 14:12:58.209194 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87wm9\" (UniqueName: \"kubernetes.io/projected/f8af8b76-fdfa-485e-861d-ca2339418945-kube-api-access-87wm9\") pod \"openstack-operator-index-46qfr\" (UID: \"f8af8b76-fdfa-485e-861d-ca2339418945\") " pod="openstack-operators/openstack-operator-index-46qfr" Jan 06 14:12:58 crc kubenswrapper[4869]: I0106 14:12:58.311213 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87wm9\" (UniqueName: \"kubernetes.io/projected/f8af8b76-fdfa-485e-861d-ca2339418945-kube-api-access-87wm9\") pod \"openstack-operator-index-46qfr\" (UID: \"f8af8b76-fdfa-485e-861d-ca2339418945\") " pod="openstack-operators/openstack-operator-index-46qfr" Jan 06 14:12:58 crc kubenswrapper[4869]: I0106 14:12:58.334762 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87wm9\" (UniqueName: \"kubernetes.io/projected/f8af8b76-fdfa-485e-861d-ca2339418945-kube-api-access-87wm9\") pod \"openstack-operator-index-46qfr\" (UID: \"f8af8b76-fdfa-485e-861d-ca2339418945\") " pod="openstack-operators/openstack-operator-index-46qfr" Jan 06 14:12:58 crc kubenswrapper[4869]: I0106 14:12:58.453324 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-46qfr" Jan 06 14:12:58 crc kubenswrapper[4869]: I0106 14:12:58.661237 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-46qfr"] Jan 06 14:12:58 crc kubenswrapper[4869]: W0106 14:12:58.673896 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8af8b76_fdfa_485e_861d_ca2339418945.slice/crio-d6ea21fad7ede245fe0abbbcf5b8175adbcb369cdd954062798d738d067275af WatchSource:0}: Error finding container d6ea21fad7ede245fe0abbbcf5b8175adbcb369cdd954062798d738d067275af: Status 404 returned error can't find the container with id d6ea21fad7ede245fe0abbbcf5b8175adbcb369cdd954062798d738d067275af Jan 06 14:12:59 crc kubenswrapper[4869]: I0106 14:12:59.148122 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:59 crc kubenswrapper[4869]: I0106 14:12:59.197745 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-svd9m" Jan 06 14:12:59 crc kubenswrapper[4869]: I0106 14:12:59.333439 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-46qfr" event={"ID":"f8af8b76-fdfa-485e-861d-ca2339418945","Type":"ContainerStarted","Data":"d6ea21fad7ede245fe0abbbcf5b8175adbcb369cdd954062798d738d067275af"} Jan 06 14:13:01 crc kubenswrapper[4869]: I0106 14:13:01.500887 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-46qfr"] Jan 06 14:13:02 crc kubenswrapper[4869]: I0106 14:13:02.118907 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6lgb5"] Jan 06 14:13:02 crc kubenswrapper[4869]: I0106 14:13:02.120819 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6lgb5" Jan 06 14:13:02 crc kubenswrapper[4869]: I0106 14:13:02.125519 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6lgb5"] Jan 06 14:13:02 crc kubenswrapper[4869]: I0106 14:13:02.184224 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2c5g\" (UniqueName: \"kubernetes.io/projected/1ef55df8-93a5-440e-a53d-1c4b3eea7d0e-kube-api-access-k2c5g\") pod \"openstack-operator-index-6lgb5\" (UID: \"1ef55df8-93a5-440e-a53d-1c4b3eea7d0e\") " pod="openstack-operators/openstack-operator-index-6lgb5" Jan 06 14:13:02 crc kubenswrapper[4869]: I0106 14:13:02.286414 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2c5g\" (UniqueName: \"kubernetes.io/projected/1ef55df8-93a5-440e-a53d-1c4b3eea7d0e-kube-api-access-k2c5g\") pod \"openstack-operator-index-6lgb5\" (UID: \"1ef55df8-93a5-440e-a53d-1c4b3eea7d0e\") " pod="openstack-operators/openstack-operator-index-6lgb5" Jan 06 14:13:02 crc kubenswrapper[4869]: I0106 14:13:02.319465 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2c5g\" (UniqueName: \"kubernetes.io/projected/1ef55df8-93a5-440e-a53d-1c4b3eea7d0e-kube-api-access-k2c5g\") pod \"openstack-operator-index-6lgb5\" (UID: \"1ef55df8-93a5-440e-a53d-1c4b3eea7d0e\") " pod="openstack-operators/openstack-operator-index-6lgb5" Jan 06 14:13:02 crc kubenswrapper[4869]: I0106 14:13:02.356881 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-46qfr" event={"ID":"f8af8b76-fdfa-485e-861d-ca2339418945","Type":"ContainerStarted","Data":"6717274e9ab211ee345eaf6be66c68e51c33662f27a8a99c3512cb5cb2146d9a"} Jan 06 14:13:02 crc kubenswrapper[4869]: I0106 14:13:02.357018 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-46qfr" podUID="f8af8b76-fdfa-485e-861d-ca2339418945" containerName="registry-server" containerID="cri-o://6717274e9ab211ee345eaf6be66c68e51c33662f27a8a99c3512cb5cb2146d9a" gracePeriod=2 Jan 06 14:13:02 crc kubenswrapper[4869]: I0106 14:13:02.380127 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-46qfr" podStartSLOduration=1.095969634 podStartE2EDuration="4.380109494s" podCreationTimestamp="2026-01-06 14:12:58 +0000 UTC" firstStartedPulling="2026-01-06 14:12:58.676054731 +0000 UTC m=+797.215742395" lastFinishedPulling="2026-01-06 14:13:01.960194591 +0000 UTC m=+800.499882255" observedRunningTime="2026-01-06 14:13:02.37595972 +0000 UTC m=+800.915647384" watchObservedRunningTime="2026-01-06 14:13:02.380109494 +0000 UTC m=+800.919797158" Jan 06 14:13:02 crc kubenswrapper[4869]: I0106 14:13:02.444920 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6lgb5" Jan 06 14:13:02 crc kubenswrapper[4869]: I0106 14:13:02.796984 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-46qfr" Jan 06 14:13:02 crc kubenswrapper[4869]: I0106 14:13:02.895101 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87wm9\" (UniqueName: \"kubernetes.io/projected/f8af8b76-fdfa-485e-861d-ca2339418945-kube-api-access-87wm9\") pod \"f8af8b76-fdfa-485e-861d-ca2339418945\" (UID: \"f8af8b76-fdfa-485e-861d-ca2339418945\") " Jan 06 14:13:02 crc kubenswrapper[4869]: I0106 14:13:02.898555 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6lgb5"] Jan 06 14:13:02 crc kubenswrapper[4869]: W0106 14:13:02.899686 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ef55df8_93a5_440e_a53d_1c4b3eea7d0e.slice/crio-2dc263c573d6f4ac78b1a5dae22aefd01dc412cfeb8daa4976daf12fbeda4bc2 WatchSource:0}: Error finding container 2dc263c573d6f4ac78b1a5dae22aefd01dc412cfeb8daa4976daf12fbeda4bc2: Status 404 returned error can't find the container with id 2dc263c573d6f4ac78b1a5dae22aefd01dc412cfeb8daa4976daf12fbeda4bc2 Jan 06 14:13:02 crc kubenswrapper[4869]: I0106 14:13:02.902777 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8af8b76-fdfa-485e-861d-ca2339418945-kube-api-access-87wm9" (OuterVolumeSpecName: "kube-api-access-87wm9") pod "f8af8b76-fdfa-485e-861d-ca2339418945" (UID: "f8af8b76-fdfa-485e-861d-ca2339418945"). InnerVolumeSpecName "kube-api-access-87wm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:13:02 crc kubenswrapper[4869]: I0106 14:13:02.997452 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87wm9\" (UniqueName: \"kubernetes.io/projected/f8af8b76-fdfa-485e-861d-ca2339418945-kube-api-access-87wm9\") on node \"crc\" DevicePath \"\"" Jan 06 14:13:03 crc kubenswrapper[4869]: I0106 14:13:03.371880 4869 generic.go:334] "Generic (PLEG): container finished" podID="f8af8b76-fdfa-485e-861d-ca2339418945" containerID="6717274e9ab211ee345eaf6be66c68e51c33662f27a8a99c3512cb5cb2146d9a" exitCode=0 Jan 06 14:13:03 crc kubenswrapper[4869]: I0106 14:13:03.371952 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-46qfr" Jan 06 14:13:03 crc kubenswrapper[4869]: I0106 14:13:03.371977 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-46qfr" event={"ID":"f8af8b76-fdfa-485e-861d-ca2339418945","Type":"ContainerDied","Data":"6717274e9ab211ee345eaf6be66c68e51c33662f27a8a99c3512cb5cb2146d9a"} Jan 06 14:13:03 crc kubenswrapper[4869]: I0106 14:13:03.372418 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-46qfr" event={"ID":"f8af8b76-fdfa-485e-861d-ca2339418945","Type":"ContainerDied","Data":"d6ea21fad7ede245fe0abbbcf5b8175adbcb369cdd954062798d738d067275af"} Jan 06 14:13:03 crc kubenswrapper[4869]: I0106 14:13:03.372444 4869 scope.go:117] "RemoveContainer" containerID="6717274e9ab211ee345eaf6be66c68e51c33662f27a8a99c3512cb5cb2146d9a" Jan 06 14:13:03 crc kubenswrapper[4869]: I0106 14:13:03.374138 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6lgb5" event={"ID":"1ef55df8-93a5-440e-a53d-1c4b3eea7d0e","Type":"ContainerStarted","Data":"83f45a87ae7896f11a49579133288a7f9cf58e6ff2216344282fcfa2f7dc2b67"} Jan 06 14:13:03 crc kubenswrapper[4869]: I0106 14:13:03.374396 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6lgb5" event={"ID":"1ef55df8-93a5-440e-a53d-1c4b3eea7d0e","Type":"ContainerStarted","Data":"2dc263c573d6f4ac78b1a5dae22aefd01dc412cfeb8daa4976daf12fbeda4bc2"} Jan 06 14:13:03 crc kubenswrapper[4869]: I0106 14:13:03.402228 4869 scope.go:117] "RemoveContainer" containerID="6717274e9ab211ee345eaf6be66c68e51c33662f27a8a99c3512cb5cb2146d9a" Jan 06 14:13:03 crc kubenswrapper[4869]: I0106 14:13:03.403553 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6lgb5" podStartSLOduration=1.3562123640000001 podStartE2EDuration="1.403534949s" podCreationTimestamp="2026-01-06 14:13:02 +0000 UTC" firstStartedPulling="2026-01-06 14:13:02.903733755 +0000 UTC m=+801.443421419" lastFinishedPulling="2026-01-06 14:13:02.95105634 +0000 UTC m=+801.490744004" observedRunningTime="2026-01-06 14:13:03.39799551 +0000 UTC m=+801.937683184" watchObservedRunningTime="2026-01-06 14:13:03.403534949 +0000 UTC m=+801.943222613" Jan 06 14:13:03 crc kubenswrapper[4869]: E0106 14:13:03.403622 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6717274e9ab211ee345eaf6be66c68e51c33662f27a8a99c3512cb5cb2146d9a\": container with ID starting with 6717274e9ab211ee345eaf6be66c68e51c33662f27a8a99c3512cb5cb2146d9a not found: ID does not exist" containerID="6717274e9ab211ee345eaf6be66c68e51c33662f27a8a99c3512cb5cb2146d9a" Jan 06 14:13:03 crc kubenswrapper[4869]: I0106 14:13:03.403767 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6717274e9ab211ee345eaf6be66c68e51c33662f27a8a99c3512cb5cb2146d9a"} err="failed to get container status \"6717274e9ab211ee345eaf6be66c68e51c33662f27a8a99c3512cb5cb2146d9a\": rpc error: code = NotFound desc = could not find container \"6717274e9ab211ee345eaf6be66c68e51c33662f27a8a99c3512cb5cb2146d9a\": container with ID starting with 6717274e9ab211ee345eaf6be66c68e51c33662f27a8a99c3512cb5cb2146d9a not found: ID does not exist" Jan 06 14:13:03 crc kubenswrapper[4869]: I0106 14:13:03.415858 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/openstack-operator-index-46qfr"] Jan 06 14:13:03 crc kubenswrapper[4869]: I0106 14:13:03.419557 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-46qfr"] Jan 06 14:13:03 crc kubenswrapper[4869]: I0106 14:13:03.538236 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dzkl6" Jan 06 14:13:03 crc kubenswrapper[4869]: I0106 14:13:03.640713 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-2jc5d" Jan 06 14:13:03 crc kubenswrapper[4869]: I0106 14:13:03.715914 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8af8b76-fdfa-485e-861d-ca2339418945" path="/var/lib/kubelet/pods/f8af8b76-fdfa-485e-861d-ca2339418945/volumes" Jan 06 14:13:04 crc kubenswrapper[4869]: I0106 14:13:04.151856 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-svd9m" Jan 06 14:13:12 crc kubenswrapper[4869]: I0106 14:13:12.445393 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-6lgb5" Jan 06 14:13:12 crc kubenswrapper[4869]: I0106 14:13:12.446096 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-6lgb5" Jan 06 14:13:12 crc kubenswrapper[4869]: I0106 14:13:12.497633 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-6lgb5" Jan 06 14:13:13 crc kubenswrapper[4869]: I0106 14:13:13.478548 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-6lgb5" Jan 06 14:13:20 crc kubenswrapper[4869]: I0106 14:13:20.013317 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz"] Jan 06 14:13:20 crc kubenswrapper[4869]: E0106 14:13:20.015224 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8af8b76-fdfa-485e-861d-ca2339418945" containerName="registry-server" Jan 06 14:13:20 crc kubenswrapper[4869]: I0106 14:13:20.015337 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8af8b76-fdfa-485e-861d-ca2339418945" containerName="registry-server" Jan 06 14:13:20 crc kubenswrapper[4869]: I0106 14:13:20.015545 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8af8b76-fdfa-485e-861d-ca2339418945" containerName="registry-server" Jan 06 14:13:20 crc kubenswrapper[4869]: I0106 14:13:20.017277 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz" Jan 06 14:13:20 crc kubenswrapper[4869]: I0106 14:13:20.020177 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xqj8z" Jan 06 14:13:20 crc kubenswrapper[4869]: I0106 14:13:20.022135 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz"] Jan 06 14:13:20 crc kubenswrapper[4869]: I0106 14:13:20.073641 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4bf904d2-df2b-4d07-b3ab-ed4881daeef4-util\") pod \"da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz\" (UID: \"4bf904d2-df2b-4d07-b3ab-ed4881daeef4\") " pod="openstack-operators/da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz" Jan 06 14:13:20 crc kubenswrapper[4869]: I0106 14:13:20.073838 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4bf904d2-df2b-4d07-b3ab-ed4881daeef4-bundle\") pod \"da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz\" (UID: \"4bf904d2-df2b-4d07-b3ab-ed4881daeef4\") " pod="openstack-operators/da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz" Jan 06 14:13:20 crc kubenswrapper[4869]: I0106 14:13:20.073880 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc6jl\" (UniqueName: \"kubernetes.io/projected/4bf904d2-df2b-4d07-b3ab-ed4881daeef4-kube-api-access-hc6jl\") pod \"da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz\" (UID: \"4bf904d2-df2b-4d07-b3ab-ed4881daeef4\") " pod="openstack-operators/da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz" Jan 06 14:13:20 crc kubenswrapper[4869]: I0106 14:13:20.175658 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4bf904d2-df2b-4d07-b3ab-ed4881daeef4-util\") pod \"da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz\" (UID: \"4bf904d2-df2b-4d07-b3ab-ed4881daeef4\") " pod="openstack-operators/da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz" Jan 06 14:13:20 crc kubenswrapper[4869]: I0106 14:13:20.176186 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4bf904d2-df2b-4d07-b3ab-ed4881daeef4-bundle\") pod \"da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz\" (UID: \"4bf904d2-df2b-4d07-b3ab-ed4881daeef4\") " pod="openstack-operators/da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz" Jan 06 14:13:20 crc kubenswrapper[4869]: I0106 14:13:20.176228 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc6jl\" (UniqueName: \"kubernetes.io/projected/4bf904d2-df2b-4d07-b3ab-ed4881daeef4-kube-api-access-hc6jl\") pod \"da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz\" (UID: \"4bf904d2-df2b-4d07-b3ab-ed4881daeef4\") " pod="openstack-operators/da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz" Jan 06 14:13:20 crc kubenswrapper[4869]: I0106 14:13:20.176333 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/4bf904d2-df2b-4d07-b3ab-ed4881daeef4-util\") pod \"da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz\" (UID: \"4bf904d2-df2b-4d07-b3ab-ed4881daeef4\") " pod="openstack-operators/da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz" Jan 06 14:13:20 crc kubenswrapper[4869]: I0106 14:13:20.176637 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4bf904d2-df2b-4d07-b3ab-ed4881daeef4-bundle\") pod \"da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz\" (UID: \"4bf904d2-df2b-4d07-b3ab-ed4881daeef4\") " pod="openstack-operators/da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz" Jan 06 14:13:20 crc kubenswrapper[4869]: I0106 14:13:20.200803 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc6jl\" (UniqueName: \"kubernetes.io/projected/4bf904d2-df2b-4d07-b3ab-ed4881daeef4-kube-api-access-hc6jl\") pod \"da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz\" (UID: \"4bf904d2-df2b-4d07-b3ab-ed4881daeef4\") " pod="openstack-operators/da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz" Jan 06 14:13:20 crc kubenswrapper[4869]: I0106 14:13:20.337067 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz" Jan 06 14:13:20 crc kubenswrapper[4869]: I0106 14:13:20.556881 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz"] Jan 06 14:13:21 crc kubenswrapper[4869]: I0106 14:13:21.506723 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz" event={"ID":"4bf904d2-df2b-4d07-b3ab-ed4881daeef4","Type":"ContainerDied","Data":"0e42e58fae07ffff18d0bc72808ff3b4c1316dc980097eab62721354b81bae15"} Jan 06 14:13:21 crc kubenswrapper[4869]: I0106 14:13:21.506518 4869 generic.go:334] "Generic (PLEG): container finished" podID="4bf904d2-df2b-4d07-b3ab-ed4881daeef4" containerID="0e42e58fae07ffff18d0bc72808ff3b4c1316dc980097eab62721354b81bae15" exitCode=0 Jan 06 14:13:21 crc kubenswrapper[4869]: I0106 14:13:21.507700 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz" event={"ID":"4bf904d2-df2b-4d07-b3ab-ed4881daeef4","Type":"ContainerStarted","Data":"ab5037bbb35b6b235fdd152c03df4d4198f3bdfaf573b66c1e9d8c76d367d6e0"} Jan 06 14:13:22 crc kubenswrapper[4869]: I0106 14:13:22.517803 4869 generic.go:334] "Generic (PLEG): container finished" podID="4bf904d2-df2b-4d07-b3ab-ed4881daeef4" containerID="e5e92bae082cfdade70879349cf488c68241b9dff5f785c75b6251d9b93a36cc" exitCode=0 Jan 06 14:13:22 crc kubenswrapper[4869]: I0106 14:13:22.517888 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz" event={"ID":"4bf904d2-df2b-4d07-b3ab-ed4881daeef4","Type":"ContainerDied","Data":"e5e92bae082cfdade70879349cf488c68241b9dff5f785c75b6251d9b93a36cc"} Jan 06 14:13:23 crc kubenswrapper[4869]: I0106 14:13:23.528482 4869 generic.go:334] "Generic (PLEG): container finished" podID="4bf904d2-df2b-4d07-b3ab-ed4881daeef4" containerID="5d82d1c3348c61ee73e8881177c39d29f771d4b411d6df166b71ff7cfcafe5e3" exitCode=0 Jan 06 14:13:23 crc kubenswrapper[4869]: I0106 14:13:23.528573 4869 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz" event={"ID":"4bf904d2-df2b-4d07-b3ab-ed4881daeef4","Type":"ContainerDied","Data":"5d82d1c3348c61ee73e8881177c39d29f771d4b411d6df166b71ff7cfcafe5e3"} Jan 06 14:13:24 crc kubenswrapper[4869]: I0106 14:13:24.826316 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz" Jan 06 14:13:24 crc kubenswrapper[4869]: I0106 14:13:24.966941 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4bf904d2-df2b-4d07-b3ab-ed4881daeef4-bundle\") pod \"4bf904d2-df2b-4d07-b3ab-ed4881daeef4\" (UID: \"4bf904d2-df2b-4d07-b3ab-ed4881daeef4\") " Jan 06 14:13:24 crc kubenswrapper[4869]: I0106 14:13:24.968038 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf904d2-df2b-4d07-b3ab-ed4881daeef4-bundle" (OuterVolumeSpecName: "bundle") pod "4bf904d2-df2b-4d07-b3ab-ed4881daeef4" (UID: "4bf904d2-df2b-4d07-b3ab-ed4881daeef4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:13:24 crc kubenswrapper[4869]: I0106 14:13:24.968290 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4bf904d2-df2b-4d07-b3ab-ed4881daeef4-util\") pod \"4bf904d2-df2b-4d07-b3ab-ed4881daeef4\" (UID: \"4bf904d2-df2b-4d07-b3ab-ed4881daeef4\") " Jan 06 14:13:24 crc kubenswrapper[4869]: I0106 14:13:24.968330 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc6jl\" (UniqueName: \"kubernetes.io/projected/4bf904d2-df2b-4d07-b3ab-ed4881daeef4-kube-api-access-hc6jl\") pod \"4bf904d2-df2b-4d07-b3ab-ed4881daeef4\" (UID: \"4bf904d2-df2b-4d07-b3ab-ed4881daeef4\") " Jan 06 14:13:24 crc kubenswrapper[4869]: I0106 14:13:24.968890 4869 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4bf904d2-df2b-4d07-b3ab-ed4881daeef4-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:13:24 crc kubenswrapper[4869]: I0106 14:13:24.976387 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf904d2-df2b-4d07-b3ab-ed4881daeef4-kube-api-access-hc6jl" (OuterVolumeSpecName: "kube-api-access-hc6jl") pod "4bf904d2-df2b-4d07-b3ab-ed4881daeef4" (UID: "4bf904d2-df2b-4d07-b3ab-ed4881daeef4"). InnerVolumeSpecName "kube-api-access-hc6jl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:13:24 crc kubenswrapper[4869]: I0106 14:13:24.984887 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf904d2-df2b-4d07-b3ab-ed4881daeef4-util" (OuterVolumeSpecName: "util") pod "4bf904d2-df2b-4d07-b3ab-ed4881daeef4" (UID: "4bf904d2-df2b-4d07-b3ab-ed4881daeef4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:13:25 crc kubenswrapper[4869]: I0106 14:13:25.070811 4869 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4bf904d2-df2b-4d07-b3ab-ed4881daeef4-util\") on node \"crc\" DevicePath \"\"" Jan 06 14:13:25 crc kubenswrapper[4869]: I0106 14:13:25.071087 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc6jl\" (UniqueName: \"kubernetes.io/projected/4bf904d2-df2b-4d07-b3ab-ed4881daeef4-kube-api-access-hc6jl\") on node \"crc\" DevicePath \"\"" Jan 06 14:13:25 crc kubenswrapper[4869]: I0106 14:13:25.550237 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz" event={"ID":"4bf904d2-df2b-4d07-b3ab-ed4881daeef4","Type":"ContainerDied","Data":"ab5037bbb35b6b235fdd152c03df4d4198f3bdfaf573b66c1e9d8c76d367d6e0"} Jan 06 14:13:25 crc kubenswrapper[4869]: I0106 14:13:25.550337 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab5037bbb35b6b235fdd152c03df4d4198f3bdfaf573b66c1e9d8c76d367d6e0" Jan 06 14:13:25 crc kubenswrapper[4869]: I0106 14:13:25.550289 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz" Jan 06 14:13:32 crc kubenswrapper[4869]: I0106 14:13:32.193128 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-596cb89f89-kplkv"] Jan 06 14:13:32 crc kubenswrapper[4869]: E0106 14:13:32.194272 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf904d2-df2b-4d07-b3ab-ed4881daeef4" containerName="extract" Jan 06 14:13:32 crc kubenswrapper[4869]: I0106 14:13:32.194290 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf904d2-df2b-4d07-b3ab-ed4881daeef4" containerName="extract" Jan 06 14:13:32 crc kubenswrapper[4869]: E0106 14:13:32.194306 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf904d2-df2b-4d07-b3ab-ed4881daeef4" containerName="util" Jan 06 14:13:32 crc kubenswrapper[4869]: I0106 14:13:32.194314 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf904d2-df2b-4d07-b3ab-ed4881daeef4" containerName="util" Jan 06 14:13:32 crc kubenswrapper[4869]: E0106 14:13:32.194324 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf904d2-df2b-4d07-b3ab-ed4881daeef4" containerName="pull" Jan 06 14:13:32 crc kubenswrapper[4869]: I0106 14:13:32.194330 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf904d2-df2b-4d07-b3ab-ed4881daeef4" containerName="pull" Jan 06 14:13:32 crc kubenswrapper[4869]: I0106 14:13:32.194437 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf904d2-df2b-4d07-b3ab-ed4881daeef4" containerName="extract" Jan 06 14:13:32 crc kubenswrapper[4869]: I0106 14:13:32.196539 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-596cb89f89-kplkv" Jan 06 14:13:32 crc kubenswrapper[4869]: I0106 14:13:32.199403 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-cg72z" Jan 06 14:13:32 crc kubenswrapper[4869]: I0106 14:13:32.226008 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-596cb89f89-kplkv"] Jan 06 14:13:32 crc kubenswrapper[4869]: I0106 14:13:32.277569 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swj4x\" (UniqueName: \"kubernetes.io/projected/32b4a497-f056-4c29-890a-bb5616a79adf-kube-api-access-swj4x\") pod \"openstack-operator-controller-operator-596cb89f89-kplkv\" (UID: \"32b4a497-f056-4c29-890a-bb5616a79adf\") " pod="openstack-operators/openstack-operator-controller-operator-596cb89f89-kplkv" Jan 06 14:13:32 crc kubenswrapper[4869]: I0106 14:13:32.379402 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swj4x\" (UniqueName: \"kubernetes.io/projected/32b4a497-f056-4c29-890a-bb5616a79adf-kube-api-access-swj4x\") pod \"openstack-operator-controller-operator-596cb89f89-kplkv\" (UID: \"32b4a497-f056-4c29-890a-bb5616a79adf\") " pod="openstack-operators/openstack-operator-controller-operator-596cb89f89-kplkv" Jan 06 14:13:32 crc kubenswrapper[4869]: I0106 14:13:32.428942 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swj4x\" (UniqueName: \"kubernetes.io/projected/32b4a497-f056-4c29-890a-bb5616a79adf-kube-api-access-swj4x\") pod \"openstack-operator-controller-operator-596cb89f89-kplkv\" (UID: \"32b4a497-f056-4c29-890a-bb5616a79adf\") " pod="openstack-operators/openstack-operator-controller-operator-596cb89f89-kplkv" Jan 06 14:13:32 crc kubenswrapper[4869]: I0106 14:13:32.523735 4869 util.go:30] "No sandbox for pod can be found. 
Jan 06 14:13:33 crc kubenswrapper[4869]: I0106 14:13:33.024184 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-596cb89f89-kplkv"]
Jan 06 14:13:33 crc kubenswrapper[4869]: W0106 14:13:33.033523 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32b4a497_f056_4c29_890a_bb5616a79adf.slice/crio-05a69e6fe2cabe5ce0960e88b4ccd585af23969fc3a2658da2a2aeda03ee522d WatchSource:0}: Error finding container 05a69e6fe2cabe5ce0960e88b4ccd585af23969fc3a2658da2a2aeda03ee522d: Status 404 returned error can't find the container with id 05a69e6fe2cabe5ce0960e88b4ccd585af23969fc3a2658da2a2aeda03ee522d
Jan 06 14:13:33 crc kubenswrapper[4869]: I0106 14:13:33.604574 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-596cb89f89-kplkv" event={"ID":"32b4a497-f056-4c29-890a-bb5616a79adf","Type":"ContainerStarted","Data":"05a69e6fe2cabe5ce0960e88b4ccd585af23969fc3a2658da2a2aeda03ee522d"}
Jan 06 14:13:38 crc kubenswrapper[4869]: I0106 14:13:38.652044 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-596cb89f89-kplkv" event={"ID":"32b4a497-f056-4c29-890a-bb5616a79adf","Type":"ContainerStarted","Data":"a69cd355a731e8491b44a5d1d0ca68bb8ec440178906c72ef06f27441621c7cd"}
Jan 06 14:13:38 crc kubenswrapper[4869]: I0106 14:13:38.652483 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-596cb89f89-kplkv"
Jan 06 14:13:38 crc kubenswrapper[4869]: I0106 14:13:38.684191 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-596cb89f89-kplkv" podStartSLOduration=1.6265903210000001 podStartE2EDuration="6.684169865s" podCreationTimestamp="2026-01-06 14:13:32 +0000 UTC" firstStartedPulling="2026-01-06 14:13:33.036344402 +0000 UTC m=+831.576032066" lastFinishedPulling="2026-01-06 14:13:38.093923946 +0000 UTC m=+836.633611610" observedRunningTime="2026-01-06 14:13:38.678591575 +0000 UTC m=+837.218279239" watchObservedRunningTime="2026-01-06 14:13:38.684169865 +0000 UTC m=+837.223857529"
Jan 06 14:13:52 crc kubenswrapper[4869]: I0106 14:13:52.526943 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-596cb89f89-kplkv"
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.324343 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-5tjdn"]
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.327020 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-5tjdn"
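
The pod_startup_latency_tracker entry above carries enough data to reconstruct its own arithmetic: podStartE2EDuration (6.684169865s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (1.626590321s) is that value minus the image-pull window, lastFinishedPulling - firstStartedPulling = 14:13:38.093923946 - 14:13:33.036344402 = 5.057579544s. The relationship is inferred from these logged values rather than quoted from kubelet source; the sketch below just replays the numbers.

```go
// Replays the startup-latency arithmetic using the timestamps logged above.
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse(time.RFC3339Nano, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-01-06T14:13:32Z")                        // podCreationTimestamp
	firstStartedPulling := parse("2026-01-06T14:13:33.036344402Z") // image pull begins
	lastFinishedPulling := parse("2026-01-06T14:13:38.093923946Z") // image pull ends
	observedRunning := parse("2026-01-06T14:13:38.684169865Z")     // watchObservedRunningTime

	e2e := observedRunning.Sub(created)                  // podStartE2EDuration
	pull := lastFinishedPulling.Sub(firstStartedPulling) // time spent pulling the image
	fmt.Println(e2e)        // 6.684169865s
	fmt.Println(pull)       // 5.057579544s
	fmt.Println(e2e - pull) // 1.626590321s == podStartSLOduration
}
```
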
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.331399 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zgrd9"
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.340500 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-5tjdn"]
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.348049 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-2qx58"]
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.349209 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-2qx58"
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.360368 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-wtmrn"
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.364079 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-2qx58"]
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.412172 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq"]
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.418580 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq"
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.432315 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-5c2f5"
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.433359 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqxwr\" (UniqueName: \"kubernetes.io/projected/9fceb23f-1f65-40c7-b8e9-3de1097ecee2-kube-api-access-xqxwr\") pod \"barbican-operator-controller-manager-f6f74d6db-5tjdn\" (UID: \"9fceb23f-1f65-40c7-b8e9-3de1097ecee2\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-5tjdn"
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.442245 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq"]
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.485068 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7596f46b97-l75w2"]
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.495241 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7596f46b97-l75w2"
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.500155 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-f26bw"
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.504903 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-hcm2g"]
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.505906 4869 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hcm2g" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.514036 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-rb749" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.532742 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7596f46b97-l75w2"] Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.535973 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drfwm\" (UniqueName: \"kubernetes.io/projected/4a2ad023-66f0-45bc-9bea-b64cca26c388-kube-api-access-drfwm\") pod \"designate-operator-controller-manager-66f8b87655-g7gcq\" (UID: \"4a2ad023-66f0-45bc-9bea-b64cca26c388\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.536027 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkdb5\" (UniqueName: \"kubernetes.io/projected/6e523183-ec1a-481e-822e-67c457b448c0-kube-api-access-qkdb5\") pod \"cinder-operator-controller-manager-78979fc445-2qx58\" (UID: \"6e523183-ec1a-481e-822e-67c457b448c0\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-2qx58" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.536086 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44vvq\" (UniqueName: \"kubernetes.io/projected/a9cad33b-8b9c-434b-9e28-f730ca0cba42-kube-api-access-44vvq\") pod \"glance-operator-controller-manager-7596f46b97-l75w2\" (UID: \"a9cad33b-8b9c-434b-9e28-f730ca0cba42\") " pod="openstack-operators/glance-operator-controller-manager-7596f46b97-l75w2" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.536110 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqxwr\" (UniqueName: \"kubernetes.io/projected/9fceb23f-1f65-40c7-b8e9-3de1097ecee2-kube-api-access-xqxwr\") pod \"barbican-operator-controller-manager-f6f74d6db-5tjdn\" (UID: \"9fceb23f-1f65-40c7-b8e9-3de1097ecee2\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-5tjdn" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.536130 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h98st\" (UniqueName: \"kubernetes.io/projected/81a6ac18-5e57-4f17-a5b3-64b76e59f83b-kube-api-access-h98st\") pod \"heat-operator-controller-manager-658dd65b86-hcm2g\" (UID: \"81a6ac18-5e57-4f17-a5b3-64b76e59f83b\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hcm2g" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.558738 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-hcm2g"] Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.609759 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-npl5f"] Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.614220 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-npl5f" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.630073 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-4xnzh" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.656435 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqxwr\" (UniqueName: \"kubernetes.io/projected/9fceb23f-1f65-40c7-b8e9-3de1097ecee2-kube-api-access-xqxwr\") pod \"barbican-operator-controller-manager-f6f74d6db-5tjdn\" (UID: \"9fceb23f-1f65-40c7-b8e9-3de1097ecee2\") " pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-5tjdn" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.657328 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-npl5f"] Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.669438 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkdb5\" (UniqueName: \"kubernetes.io/projected/6e523183-ec1a-481e-822e-67c457b448c0-kube-api-access-qkdb5\") pod \"cinder-operator-controller-manager-78979fc445-2qx58\" (UID: \"6e523183-ec1a-481e-822e-67c457b448c0\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-2qx58" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.669600 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44vvq\" (UniqueName: \"kubernetes.io/projected/a9cad33b-8b9c-434b-9e28-f730ca0cba42-kube-api-access-44vvq\") pod \"glance-operator-controller-manager-7596f46b97-l75w2\" (UID: \"a9cad33b-8b9c-434b-9e28-f730ca0cba42\") " pod="openstack-operators/glance-operator-controller-manager-7596f46b97-l75w2" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.669632 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h98st\" (UniqueName: \"kubernetes.io/projected/81a6ac18-5e57-4f17-a5b3-64b76e59f83b-kube-api-access-h98st\") pod \"heat-operator-controller-manager-658dd65b86-hcm2g\" (UID: \"81a6ac18-5e57-4f17-a5b3-64b76e59f83b\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hcm2g" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.669781 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drfwm\" (UniqueName: \"kubernetes.io/projected/4a2ad023-66f0-45bc-9bea-b64cca26c388-kube-api-access-drfwm\") pod \"designate-operator-controller-manager-66f8b87655-g7gcq\" (UID: \"4a2ad023-66f0-45bc-9bea-b64cca26c388\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.685996 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-5tjdn" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.691742 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7"] Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.692879 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.706175 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.706389 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-95g8x" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.739754 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-g6xt2"] Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.740957 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g6xt2" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.743382 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44vvq\" (UniqueName: \"kubernetes.io/projected/a9cad33b-8b9c-434b-9e28-f730ca0cba42-kube-api-access-44vvq\") pod \"glance-operator-controller-manager-7596f46b97-l75w2\" (UID: \"a9cad33b-8b9c-434b-9e28-f730ca0cba42\") " pod="openstack-operators/glance-operator-controller-manager-7596f46b97-l75w2" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.745835 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-89r98" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.747008 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drfwm\" (UniqueName: \"kubernetes.io/projected/4a2ad023-66f0-45bc-9bea-b64cca26c388-kube-api-access-drfwm\") pod \"designate-operator-controller-manager-66f8b87655-g7gcq\" (UID: \"4a2ad023-66f0-45bc-9bea-b64cca26c388\") " pod="openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.751191 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.780679 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7"] Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.782542 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmkw4\" (UniqueName: \"kubernetes.io/projected/d04195cb-3a00-4785-860d-8bb9537f42b7-kube-api-access-zmkw4\") pod \"ironic-operator-controller-manager-f99f54bc8-g6xt2\" (UID: \"d04195cb-3a00-4785-860d-8bb9537f42b7\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g6xt2" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.782578 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b295076d-930c-4a2b-9ba5-3cee1623e268-cert\") pod \"infra-operator-controller-manager-6d99759cf-t68w7\" (UID: \"b295076d-930c-4a2b-9ba5-3cee1623e268\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.782712 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv4vp\" (UniqueName: \"kubernetes.io/projected/9b55eca9-5342-4826-b2fd-3fe94520e1f2-kube-api-access-hv4vp\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-npl5f\" (UID: \"9b55eca9-5342-4826-b2fd-3fe94520e1f2\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-npl5f" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.782803 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqknl\" (UniqueName: \"kubernetes.io/projected/b295076d-930c-4a2b-9ba5-3cee1623e268-kube-api-access-zqknl\") pod \"infra-operator-controller-manager-6d99759cf-t68w7\" (UID: \"b295076d-930c-4a2b-9ba5-3cee1623e268\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.797613 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h98st\" (UniqueName: \"kubernetes.io/projected/81a6ac18-5e57-4f17-a5b3-64b76e59f83b-kube-api-access-h98st\") pod \"heat-operator-controller-manager-658dd65b86-hcm2g\" (UID: \"81a6ac18-5e57-4f17-a5b3-64b76e59f83b\") " pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hcm2g" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.799309 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkdb5\" (UniqueName: \"kubernetes.io/projected/6e523183-ec1a-481e-822e-67c457b448c0-kube-api-access-qkdb5\") pod \"cinder-operator-controller-manager-78979fc445-2qx58\" (UID: \"6e523183-ec1a-481e-822e-67c457b448c0\") " pod="openstack-operators/cinder-operator-controller-manager-78979fc445-2qx58" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.811595 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-g6xt2"] Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.830696 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7596f46b97-l75w2"
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.845129 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hcm2g"
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.848755 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9"]
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.849944 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9"
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.858348 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-fjl8w"
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.866379 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz"]
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.867505 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz"
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.885856 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv4vp\" (UniqueName: \"kubernetes.io/projected/9b55eca9-5342-4826-b2fd-3fe94520e1f2-kube-api-access-hv4vp\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-npl5f\" (UID: \"9b55eca9-5342-4826-b2fd-3fe94520e1f2\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-npl5f"
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.885971 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqknl\" (UniqueName: \"kubernetes.io/projected/b295076d-930c-4a2b-9ba5-3cee1623e268-kube-api-access-zqknl\") pod \"infra-operator-controller-manager-6d99759cf-t68w7\" (UID: \"b295076d-930c-4a2b-9ba5-3cee1623e268\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7"
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.886011 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmkw4\" (UniqueName: \"kubernetes.io/projected/d04195cb-3a00-4785-860d-8bb9537f42b7-kube-api-access-zmkw4\") pod \"ironic-operator-controller-manager-f99f54bc8-g6xt2\" (UID: \"d04195cb-3a00-4785-860d-8bb9537f42b7\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g6xt2"
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.886036 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b295076d-930c-4a2b-9ba5-3cee1623e268-cert\") pod \"infra-operator-controller-manager-6d99759cf-t68w7\" (UID: \"b295076d-930c-4a2b-9ba5-3cee1623e268\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7"
Jan 06 14:14:20 crc kubenswrapper[4869]: E0106 14:14:20.886215 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 06 14:14:20 crc kubenswrapper[4869]: E0106 14:14:20.886279 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b295076d-930c-4a2b-9ba5-3cee1623e268-cert podName:b295076d-930c-4a2b-9ba5-3cee1623e268 nodeName:}" failed. No retries permitted until 2026-01-06 14:14:21.386255583 +0000 UTC m=+879.925943247 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b295076d-930c-4a2b-9ba5-3cee1623e268-cert") pod "infra-operator-controller-manager-6d99759cf-t68w7" (UID: "b295076d-930c-4a2b-9ba5-3cee1623e268") : secret "infra-operator-webhook-server-cert" not found
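
This is the first failure in a pattern that recurs below: the infra-operator pod's cert volume cannot be set up because the Secret infra-operator-webhook-server-cert does not exist yet, so nestedpendingoperations blocks retries for 500ms; when the same mount fails again at 14:14:21.414939 the wait becomes 1s. A doubling backoff with some cap fits these two data points; the cap in the sketch below is an assumed placeholder, not a value taken from this log.

```go
// Minimal sketch of the durationBeforeRetry progression visible here:
// 500ms on the first failure, 1s on the next failure for the same volume.
package main

import (
	"fmt"
	"time"
)

type backoff struct{ wait time.Duration }

func (b *backoff) next() time.Duration {
	if b.wait == 0 {
		b.wait = 500 * time.Millisecond // initial durationBeforeRetry
	} else {
		b.wait *= 2 // double on every consecutive failure
	}
	if maxWait := 2 * time.Minute; b.wait > maxWait { // assumed cap
		b.wait = maxWait
	}
	return b.wait
}

func main() {
	var b backoff
	for i := 0; i < 4; i++ {
		fmt.Printf("No retries permitted for %v\n", b.next()) // 500ms 1s 2s 4s
	}
}
```

The first two values match the waits actually observed in this log; the later ones follow from the assumed doubling rule.
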
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.889108 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-9lwmq"
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.925043 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9"]
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.957562 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmkw4\" (UniqueName: \"kubernetes.io/projected/d04195cb-3a00-4785-860d-8bb9537f42b7-kube-api-access-zmkw4\") pod \"ironic-operator-controller-manager-f99f54bc8-g6xt2\" (UID: \"d04195cb-3a00-4785-860d-8bb9537f42b7\") " pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g6xt2"
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.958354 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqknl\" (UniqueName: \"kubernetes.io/projected/b295076d-930c-4a2b-9ba5-3cee1623e268-kube-api-access-zqknl\") pod \"infra-operator-controller-manager-6d99759cf-t68w7\" (UID: \"b295076d-930c-4a2b-9ba5-3cee1623e268\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7"
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.958907 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv4vp\" (UniqueName: \"kubernetes.io/projected/9b55eca9-5342-4826-b2fd-3fe94520e1f2-kube-api-access-hv4vp\") pod \"horizon-operator-controller-manager-7f5ddd8d7b-npl5f\" (UID: \"9b55eca9-5342-4826-b2fd-3fe94520e1f2\") " pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-npl5f"
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.985759 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz"]
Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.986163 4869 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-npl5f" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.992714 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzq66\" (UniqueName: \"kubernetes.io/projected/ea758643-2a27-40e6-8c7f-8b0020e0ad97-kube-api-access-wzq66\") pod \"manila-operator-controller-manager-598945d5b8-t4dkz\" (UID: \"ea758643-2a27-40e6-8c7f-8b0020e0ad97\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz" Jan 06 14:14:20 crc kubenswrapper[4869]: I0106 14:14:20.993041 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bsns\" (UniqueName: \"kubernetes.io/projected/4e8628c6-a97f-48ea-a91a-1ea5257c5e49-kube-api-access-6bsns\") pod \"keystone-operator-controller-manager-7c8fb65dbf-55rl9\" (UID: \"4e8628c6-a97f-48ea-a91a-1ea5257c5e49\") " pod="openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.010939 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-2qx58" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.044117 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-pm7np"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.045317 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-pm7np" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.065076 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-7lbqn"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.066136 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-7lbqn" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.067134 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-2kwg2" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.080133 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-hvqgx" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.080608 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-pm7np"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.095509 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bsns\" (UniqueName: \"kubernetes.io/projected/4e8628c6-a97f-48ea-a91a-1ea5257c5e49-kube-api-access-6bsns\") pod \"keystone-operator-controller-manager-7c8fb65dbf-55rl9\" (UID: \"4e8628c6-a97f-48ea-a91a-1ea5257c5e49\") " pod="openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.095592 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzq66\" (UniqueName: \"kubernetes.io/projected/ea758643-2a27-40e6-8c7f-8b0020e0ad97-kube-api-access-wzq66\") pod \"manila-operator-controller-manager-598945d5b8-t4dkz\" (UID: \"ea758643-2a27-40e6-8c7f-8b0020e0ad97\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.129748 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-7lbqn"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.143128 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bsns\" (UniqueName: \"kubernetes.io/projected/4e8628c6-a97f-48ea-a91a-1ea5257c5e49-kube-api-access-6bsns\") pod \"keystone-operator-controller-manager-7c8fb65dbf-55rl9\" (UID: \"4e8628c6-a97f-48ea-a91a-1ea5257c5e49\") " pod="openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.144928 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzq66\" (UniqueName: \"kubernetes.io/projected/ea758643-2a27-40e6-8c7f-8b0020e0ad97-kube-api-access-wzq66\") pod \"manila-operator-controller-manager-598945d5b8-t4dkz\" (UID: \"ea758643-2a27-40e6-8c7f-8b0020e0ad97\") " pod="openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.174156 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.174596 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g6xt2" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.175124 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.184126 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-ffkhc" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.190854 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.201422 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tjqk\" (UniqueName: \"kubernetes.io/projected/c634faec-64fc-4d2c-af70-94f85b6fcd59-kube-api-access-7tjqk\") pod \"mariadb-operator-controller-manager-7b88bfc995-pm7np\" (UID: \"c634faec-64fc-4d2c-af70-94f85b6fcd59\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-pm7np" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.201473 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jfbx\" (UniqueName: \"kubernetes.io/projected/995201cd-f7dd-40a5-8854-192f32239e25-kube-api-access-9jfbx\") pod \"neutron-operator-controller-manager-7cd87b778f-7lbqn\" (UID: \"995201cd-f7dd-40a5-8854-192f32239e25\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-7lbqn" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.210826 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.212518 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.223170 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-842gk" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.227899 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.228985 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.231421 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.236556 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.236938 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-mkkdk" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.239221 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-wl9w7"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.240169 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wl9w7" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.245839 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-k8jlh" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.254149 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.292943 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.302996 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-p249l"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.303249 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tjqk\" (UniqueName: \"kubernetes.io/projected/c634faec-64fc-4d2c-af70-94f85b6fcd59-kube-api-access-7tjqk\") pod \"mariadb-operator-controller-manager-7b88bfc995-pm7np\" (UID: \"c634faec-64fc-4d2c-af70-94f85b6fcd59\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-pm7np" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.303309 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jfbx\" (UniqueName: \"kubernetes.io/projected/995201cd-f7dd-40a5-8854-192f32239e25-kube-api-access-9jfbx\") pod \"neutron-operator-controller-manager-7cd87b778f-7lbqn\" (UID: \"995201cd-f7dd-40a5-8854-192f32239e25\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-7lbqn" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.303353 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7v7x\" (UniqueName: \"kubernetes.io/projected/c0aac0d5-701b-4a75-9bd0-4c9530692565-kube-api-access-g7v7x\") pod \"nova-operator-controller-manager-5fbbf8b6cc-n78kg\" (UID: \"c0aac0d5-701b-4a75-9bd0-4c9530692565\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.319560 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.319705 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-p249l" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.341302 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-6gjx8" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.367423 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jfbx\" (UniqueName: \"kubernetes.io/projected/995201cd-f7dd-40a5-8854-192f32239e25-kube-api-access-9jfbx\") pod \"neutron-operator-controller-manager-7cd87b778f-7lbqn\" (UID: \"995201cd-f7dd-40a5-8854-192f32239e25\") " pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-7lbqn" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.380183 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tjqk\" (UniqueName: \"kubernetes.io/projected/c634faec-64fc-4d2c-af70-94f85b6fcd59-kube-api-access-7tjqk\") pod \"mariadb-operator-controller-manager-7b88bfc995-pm7np\" (UID: \"c634faec-64fc-4d2c-af70-94f85b6fcd59\") " pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-pm7np" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.383484 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-wl9w7"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.400862 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.403068 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.406713 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw7lz\" (UniqueName: \"kubernetes.io/projected/ee1dd5c3-5e85-416c-933a-07fb51ec12d8-kube-api-access-tw7lz\") pod \"ovn-operator-controller-manager-bf6d4f946-wl9w7\" (UID: \"ee1dd5c3-5e85-416c-933a-07fb51ec12d8\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wl9w7" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.406966 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72kn2\" (UniqueName: \"kubernetes.io/projected/def35933-1964-4328-a9b2-dc9f72d11bcf-kube-api-access-72kn2\") pod \"placement-operator-controller-manager-9b6f8f78c-p249l\" (UID: \"def35933-1964-4328-a9b2-dc9f72d11bcf\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-p249l" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.407048 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk2wh\" (UniqueName: \"kubernetes.io/projected/3f4a328b-302b-496b-af2b-abec609682a6-kube-api-access-fk2wh\") pod \"swift-operator-controller-manager-bb586bbf4-5ltk8\" (UID: \"3f4a328b-302b-496b-af2b-abec609682a6\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.414044 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da44c856-c228-45b1-947b-891308581bb6-cert\") pod 
\"openstack-baremetal-operator-controller-manager-78948ddfd7s8247\" (UID: \"da44c856-c228-45b1-947b-891308581bb6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.414088 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl77j\" (UniqueName: \"kubernetes.io/projected/e39be0e5-0e29-45cd-925b-6eafb2b385a9-kube-api-access-gl77j\") pod \"octavia-operator-controller-manager-68c649d9d-r4ck9\" (UID: \"e39be0e5-0e29-45cd-925b-6eafb2b385a9\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.414122 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwqvk\" (UniqueName: \"kubernetes.io/projected/da44c856-c228-45b1-947b-891308581bb6-kube-api-access-cwqvk\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7s8247\" (UID: \"da44c856-c228-45b1-947b-891308581bb6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.414154 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7v7x\" (UniqueName: \"kubernetes.io/projected/c0aac0d5-701b-4a75-9bd0-4c9530692565-kube-api-access-g7v7x\") pod \"nova-operator-controller-manager-5fbbf8b6cc-n78kg\" (UID: \"c0aac0d5-701b-4a75-9bd0-4c9530692565\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.414282 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b295076d-930c-4a2b-9ba5-3cee1623e268-cert\") pod \"infra-operator-controller-manager-6d99759cf-t68w7\" (UID: \"b295076d-930c-4a2b-9ba5-3cee1623e268\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.413102 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-9ztgk" Jan 06 14:14:21 crc kubenswrapper[4869]: E0106 14:14:21.414477 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 06 14:14:21 crc kubenswrapper[4869]: E0106 14:14:21.414939 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b295076d-930c-4a2b-9ba5-3cee1623e268-cert podName:b295076d-930c-4a2b-9ba5-3cee1623e268 nodeName:}" failed. No retries permitted until 2026-01-06 14:14:22.414921878 +0000 UTC m=+880.954609542 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b295076d-930c-4a2b-9ba5-3cee1623e268-cert") pod "infra-operator-controller-manager-6d99759cf-t68w7" (UID: "b295076d-930c-4a2b-9ba5-3cee1623e268") : secret "infra-operator-webhook-server-cert" not found Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.439070 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-pm7np" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.446563 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7v7x\" (UniqueName: \"kubernetes.io/projected/c0aac0d5-701b-4a75-9bd0-4c9530692565-kube-api-access-g7v7x\") pod \"nova-operator-controller-manager-5fbbf8b6cc-n78kg\" (UID: \"c0aac0d5-701b-4a75-9bd0-4c9530692565\") " pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.471886 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-7lbqn" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.504211 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.512970 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.518614 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw7lz\" (UniqueName: \"kubernetes.io/projected/ee1dd5c3-5e85-416c-933a-07fb51ec12d8-kube-api-access-tw7lz\") pod \"ovn-operator-controller-manager-bf6d4f946-wl9w7\" (UID: \"ee1dd5c3-5e85-416c-933a-07fb51ec12d8\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wl9w7" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.518674 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72kn2\" (UniqueName: \"kubernetes.io/projected/def35933-1964-4328-a9b2-dc9f72d11bcf-kube-api-access-72kn2\") pod \"placement-operator-controller-manager-9b6f8f78c-p249l\" (UID: \"def35933-1964-4328-a9b2-dc9f72d11bcf\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-p249l" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.518722 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk2wh\" (UniqueName: \"kubernetes.io/projected/3f4a328b-302b-496b-af2b-abec609682a6-kube-api-access-fk2wh\") pod \"swift-operator-controller-manager-bb586bbf4-5ltk8\" (UID: \"3f4a328b-302b-496b-af2b-abec609682a6\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.518751 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da44c856-c228-45b1-947b-891308581bb6-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7s8247\" (UID: \"da44c856-c228-45b1-947b-891308581bb6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.518772 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl77j\" (UniqueName: \"kubernetes.io/projected/e39be0e5-0e29-45cd-925b-6eafb2b385a9-kube-api-access-gl77j\") pod \"octavia-operator-controller-manager-68c649d9d-r4ck9\" (UID: \"e39be0e5-0e29-45cd-925b-6eafb2b385a9\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.518791 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cwqvk\" (UniqueName: \"kubernetes.io/projected/da44c856-c228-45b1-947b-891308581bb6-kube-api-access-cwqvk\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7s8247\" (UID: \"da44c856-c228-45b1-947b-891308581bb6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" Jan 06 14:14:21 crc kubenswrapper[4869]: E0106 14:14:21.524202 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 06 14:14:21 crc kubenswrapper[4869]: E0106 14:14:21.524264 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da44c856-c228-45b1-947b-891308581bb6-cert podName:da44c856-c228-45b1-947b-891308581bb6 nodeName:}" failed. No retries permitted until 2026-01-06 14:14:22.024245711 +0000 UTC m=+880.563933375 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/da44c856-c228-45b1-947b-891308581bb6-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7s8247" (UID: "da44c856-c228-45b1-947b-891308581bb6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.529520 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-p249l"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.540981 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwqvk\" (UniqueName: \"kubernetes.io/projected/da44c856-c228-45b1-947b-891308581bb6-kube-api-access-cwqvk\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7s8247\" (UID: \"da44c856-c228-45b1-947b-891308581bb6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.547960 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw7lz\" (UniqueName: \"kubernetes.io/projected/ee1dd5c3-5e85-416c-933a-07fb51ec12d8-kube-api-access-tw7lz\") pod \"ovn-operator-controller-manager-bf6d4f946-wl9w7\" (UID: \"ee1dd5c3-5e85-416c-933a-07fb51ec12d8\") " pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wl9w7" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.552678 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk2wh\" (UniqueName: \"kubernetes.io/projected/3f4a328b-302b-496b-af2b-abec609682a6-kube-api-access-fk2wh\") pod \"swift-operator-controller-manager-bb586bbf4-5ltk8\" (UID: \"3f4a328b-302b-496b-af2b-abec609682a6\") " pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.555635 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-d2jnv"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.562458 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d2jnv" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.569207 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl77j\" (UniqueName: \"kubernetes.io/projected/e39be0e5-0e29-45cd-925b-6eafb2b385a9-kube-api-access-gl77j\") pod \"octavia-operator-controller-manager-68c649d9d-r4ck9\" (UID: \"e39be0e5-0e29-45cd-925b-6eafb2b385a9\") " pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.569698 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72kn2\" (UniqueName: \"kubernetes.io/projected/def35933-1964-4328-a9b2-dc9f72d11bcf-kube-api-access-72kn2\") pod \"placement-operator-controller-manager-9b6f8f78c-p249l\" (UID: \"def35933-1964-4328-a9b2-dc9f72d11bcf\") " pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-p249l" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.569840 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-9pz6v" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.581170 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-d2jnv"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.599257 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.600511 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.607003 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-x4jgh" Jan 06 14:14:21 crc kubenswrapper[4869]: W0106 14:14:21.620644 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fceb23f_1f65_40c7_b8e9_3de1097ecee2.slice/crio-35cf08ea1be7f1dbc85c4666f3050e332e35b9a6d253b6a22152bdc164f99423 WatchSource:0}: Error finding container 35cf08ea1be7f1dbc85c4666f3050e332e35b9a6d253b6a22152bdc164f99423: Status 404 returned error can't find the container with id 35cf08ea1be7f1dbc85c4666f3050e332e35b9a6d253b6a22152bdc164f99423 Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.620734 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25j57\" (UniqueName: \"kubernetes.io/projected/5427b0d1-29a3-47c0-9a1a-a945063ae129-kube-api-access-25j57\") pod \"telemetry-operator-controller-manager-68d988df55-d2jnv\" (UID: \"5427b0d1-29a3-47c0-9a1a-a945063ae129\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d2jnv" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.620786 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkn42\" (UniqueName: \"kubernetes.io/projected/7a343a23-f2df-474c-842c-f999f7d0e9b4-kube-api-access-tkn42\") pod \"test-operator-controller-manager-6c866cfdcb-45sp6\" (UID: \"7a343a23-f2df-474c-842c-f999f7d0e9b4\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" Jan 06 14:14:21 crc 
kubenswrapper[4869]: I0106 14:14:21.626848 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.636168 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.637385 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.641527 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-rc5hk" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.642131 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wl9w7" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.665211 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.683940 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-p249l" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.721736 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hw8v\" (UniqueName: \"kubernetes.io/projected/2ad69939-a56e-4589-bf4b-68fb8d42d7eb-kube-api-access-4hw8v\") pod \"watcher-operator-controller-manager-9dbdf6486-csthh\" (UID: \"2ad69939-a56e-4589-bf4b-68fb8d42d7eb\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.721796 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25j57\" (UniqueName: \"kubernetes.io/projected/5427b0d1-29a3-47c0-9a1a-a945063ae129-kube-api-access-25j57\") pod \"telemetry-operator-controller-manager-68d988df55-d2jnv\" (UID: \"5427b0d1-29a3-47c0-9a1a-a945063ae129\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d2jnv" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.721836 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkn42\" (UniqueName: \"kubernetes.io/projected/7a343a23-f2df-474c-842c-f999f7d0e9b4-kube-api-access-tkn42\") pod \"test-operator-controller-manager-6c866cfdcb-45sp6\" (UID: \"7a343a23-f2df-474c-842c-f999f7d0e9b4\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.759940 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.761118 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.763059 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.764132 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25j57\" (UniqueName: \"kubernetes.io/projected/5427b0d1-29a3-47c0-9a1a-a945063ae129-kube-api-access-25j57\") pod \"telemetry-operator-controller-manager-68d988df55-d2jnv\" (UID: \"5427b0d1-29a3-47c0-9a1a-a945063ae129\") " pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d2jnv" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.764162 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-p28br" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.765058 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.777452 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkn42\" (UniqueName: \"kubernetes.io/projected/7a343a23-f2df-474c-842c-f999f7d0e9b4-kube-api-access-tkn42\") pod \"test-operator-controller-manager-6c866cfdcb-45sp6\" (UID: \"7a343a23-f2df-474c-842c-f999f7d0e9b4\") " pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.782068 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.810402 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.812622 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jtv5n"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.813697 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jtv5n" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.815739 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-lgd5n" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.820427 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jtv5n"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.823136 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-webhook-certs\") pod \"openstack-operator-controller-manager-7d77f59d59-zfch2\" (UID: \"24ca9405-001a-4beb-a0fa-0f3775dab087\") " pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.823182 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgdvn\" (UniqueName: \"kubernetes.io/projected/24ca9405-001a-4beb-a0fa-0f3775dab087-kube-api-access-bgdvn\") pod \"openstack-operator-controller-manager-7d77f59d59-zfch2\" (UID: \"24ca9405-001a-4beb-a0fa-0f3775dab087\") " pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.825863 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hw8v\" (UniqueName: \"kubernetes.io/projected/2ad69939-a56e-4589-bf4b-68fb8d42d7eb-kube-api-access-4hw8v\") pod \"watcher-operator-controller-manager-9dbdf6486-csthh\" (UID: \"2ad69939-a56e-4589-bf4b-68fb8d42d7eb\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.825934 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-metrics-certs\") pod \"openstack-operator-controller-manager-7d77f59d59-zfch2\" (UID: \"24ca9405-001a-4beb-a0fa-0f3775dab087\") " pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.830494 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-f6f74d6db-5tjdn"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.846194 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hw8v\" (UniqueName: \"kubernetes.io/projected/2ad69939-a56e-4589-bf4b-68fb8d42d7eb-kube-api-access-4hw8v\") pod \"watcher-operator-controller-manager-9dbdf6486-csthh\" (UID: \"2ad69939-a56e-4589-bf4b-68fb8d42d7eb\") " pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.853320 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.878461 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.884594 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.917472 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d2jnv" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.928571 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m689g\" (UniqueName: \"kubernetes.io/projected/ff01227e-d9f4-4dd0-bc22-455a00294406-kube-api-access-m689g\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jtv5n\" (UID: \"ff01227e-d9f4-4dd0-bc22-455a00294406\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jtv5n" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.928716 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-metrics-certs\") pod \"openstack-operator-controller-manager-7d77f59d59-zfch2\" (UID: \"24ca9405-001a-4beb-a0fa-0f3775dab087\") " pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.928754 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-webhook-certs\") pod \"openstack-operator-controller-manager-7d77f59d59-zfch2\" (UID: \"24ca9405-001a-4beb-a0fa-0f3775dab087\") " pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.928791 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgdvn\" (UniqueName: \"kubernetes.io/projected/24ca9405-001a-4beb-a0fa-0f3775dab087-kube-api-access-bgdvn\") pod \"openstack-operator-controller-manager-7d77f59d59-zfch2\" (UID: \"24ca9405-001a-4beb-a0fa-0f3775dab087\") " pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:21 crc kubenswrapper[4869]: E0106 14:14:21.930613 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 06 14:14:21 crc kubenswrapper[4869]: E0106 14:14:21.930683 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-metrics-certs podName:24ca9405-001a-4beb-a0fa-0f3775dab087 nodeName:}" failed. No retries permitted until 2026-01-06 14:14:22.430651132 +0000 UTC m=+880.970338796 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-metrics-certs") pod "openstack-operator-controller-manager-7d77f59d59-zfch2" (UID: "24ca9405-001a-4beb-a0fa-0f3775dab087") : secret "metrics-server-cert" not found Jan 06 14:14:21 crc kubenswrapper[4869]: E0106 14:14:21.930846 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 06 14:14:21 crc kubenswrapper[4869]: E0106 14:14:21.930873 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-webhook-certs podName:24ca9405-001a-4beb-a0fa-0f3775dab087 nodeName:}" failed. No retries permitted until 2026-01-06 14:14:22.430866278 +0000 UTC m=+880.970553942 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-webhook-certs") pod "openstack-operator-controller-manager-7d77f59d59-zfch2" (UID: "24ca9405-001a-4beb-a0fa-0f3775dab087") : secret "webhook-server-cert" not found Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.943500 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-658dd65b86-hcm2g"] Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.962380 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgdvn\" (UniqueName: \"kubernetes.io/projected/24ca9405-001a-4beb-a0fa-0f3775dab087-kube-api-access-bgdvn\") pod \"openstack-operator-controller-manager-7d77f59d59-zfch2\" (UID: \"24ca9405-001a-4beb-a0fa-0f3775dab087\") " pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:21 crc kubenswrapper[4869]: I0106 14:14:21.967235 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-npl5f"] Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.020420 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.031857 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da44c856-c228-45b1-947b-891308581bb6-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7s8247\" (UID: \"da44c856-c228-45b1-947b-891308581bb6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.031921 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m689g\" (UniqueName: \"kubernetes.io/projected/ff01227e-d9f4-4dd0-bc22-455a00294406-kube-api-access-m689g\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jtv5n\" (UID: \"ff01227e-d9f4-4dd0-bc22-455a00294406\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jtv5n" Jan 06 14:14:22 crc kubenswrapper[4869]: E0106 14:14:22.032367 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 06 14:14:22 crc kubenswrapper[4869]: E0106 14:14:22.032411 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da44c856-c228-45b1-947b-891308581bb6-cert podName:da44c856-c228-45b1-947b-891308581bb6 nodeName:}" failed. No retries permitted until 2026-01-06 14:14:23.032398783 +0000 UTC m=+881.572086447 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/da44c856-c228-45b1-947b-891308581bb6-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7s8247" (UID: "da44c856-c228-45b1-947b-891308581bb6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.054507 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m689g\" (UniqueName: \"kubernetes.io/projected/ff01227e-d9f4-4dd0-bc22-455a00294406-kube-api-access-m689g\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jtv5n\" (UID: \"ff01227e-d9f4-4dd0-bc22-455a00294406\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jtv5n" Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.128566 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7596f46b97-l75w2"] Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.129014 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hcm2g" event={"ID":"81a6ac18-5e57-4f17-a5b3-64b76e59f83b","Type":"ContainerStarted","Data":"4f27ba9cc12caf7fe60b0489f30ca031ee1989ad5376ae65af0c1ce4043efced"} Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.134794 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-78979fc445-2qx58"] Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.136930 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-5tjdn" event={"ID":"9fceb23f-1f65-40c7-b8e9-3de1097ecee2","Type":"ContainerStarted","Data":"35cf08ea1be7f1dbc85c4666f3050e332e35b9a6d253b6a22152bdc164f99423"} Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.140219 4869 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9"] Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.141054 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-npl5f" event={"ID":"9b55eca9-5342-4826-b2fd-3fe94520e1f2","Type":"ContainerStarted","Data":"8c725544f5f6b4eb1c1bf47ab654973db8824ee59cc49f3cf064026f120911a0"} Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.158240 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq" event={"ID":"4a2ad023-66f0-45bc-9bea-b64cca26c388","Type":"ContainerStarted","Data":"88d7895f267f3c25d3439b67714ece5b1be97fab4a4483a39824145a67b2fd98"} Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.278519 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jtv5n" Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.327536 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz"] Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.334203 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b88bfc995-pm7np"] Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.340100 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cd87b778f-7lbqn"] Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.347885 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-f99f54bc8-g6xt2"] Jan 06 14:14:22 crc kubenswrapper[4869]: W0106 14:14:22.348020 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd04195cb_3a00_4785_860d_8bb9537f42b7.slice/crio-4bf8c611a06d0ddae754fc54d5b08b7679032e93d767fc6ea7aaa4340d67b4fa WatchSource:0}: Error finding container 4bf8c611a06d0ddae754fc54d5b08b7679032e93d767fc6ea7aaa4340d67b4fa: Status 404 returned error can't find the container with id 4bf8c611a06d0ddae754fc54d5b08b7679032e93d767fc6ea7aaa4340d67b4fa Jan 06 14:14:22 crc kubenswrapper[4869]: W0106 14:14:22.351091 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc634faec_64fc_4d2c_af70_94f85b6fcd59.slice/crio-d92f0eaaaba351453bb75bcc1f2409e1eeed81e3ccb0d861c74a4f671157318d WatchSource:0}: Error finding container d92f0eaaaba351453bb75bcc1f2409e1eeed81e3ccb0d861c74a4f671157318d: Status 404 returned error can't find the container with id d92f0eaaaba351453bb75bcc1f2409e1eeed81e3ccb0d861c74a4f671157318d Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.438562 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b295076d-930c-4a2b-9ba5-3cee1623e268-cert\") pod \"infra-operator-controller-manager-6d99759cf-t68w7\" (UID: \"b295076d-930c-4a2b-9ba5-3cee1623e268\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.438638 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-metrics-certs\") pod \"openstack-operator-controller-manager-7d77f59d59-zfch2\" (UID: \"24ca9405-001a-4beb-a0fa-0f3775dab087\") " pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.438716 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-webhook-certs\") pod \"openstack-operator-controller-manager-7d77f59d59-zfch2\" (UID: \"24ca9405-001a-4beb-a0fa-0f3775dab087\") " pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:22 crc kubenswrapper[4869]: E0106 14:14:22.438789 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 06 14:14:22 crc kubenswrapper[4869]: E0106 14:14:22.438855 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b295076d-930c-4a2b-9ba5-3cee1623e268-cert podName:b295076d-930c-4a2b-9ba5-3cee1623e268 nodeName:}" failed. No retries permitted until 2026-01-06 14:14:24.438835374 +0000 UTC m=+882.978523038 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b295076d-930c-4a2b-9ba5-3cee1623e268-cert") pod "infra-operator-controller-manager-6d99759cf-t68w7" (UID: "b295076d-930c-4a2b-9ba5-3cee1623e268") : secret "infra-operator-webhook-server-cert" not found Jan 06 14:14:22 crc kubenswrapper[4869]: E0106 14:14:22.438877 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 06 14:14:22 crc kubenswrapper[4869]: E0106 14:14:22.438940 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 06 14:14:22 crc kubenswrapper[4869]: E0106 14:14:22.438990 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-metrics-certs podName:24ca9405-001a-4beb-a0fa-0f3775dab087 nodeName:}" failed. No retries permitted until 2026-01-06 14:14:23.438948058 +0000 UTC m=+881.978635722 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-metrics-certs") pod "openstack-operator-controller-manager-7d77f59d59-zfch2" (UID: "24ca9405-001a-4beb-a0fa-0f3775dab087") : secret "metrics-server-cert" not found Jan 06 14:14:22 crc kubenswrapper[4869]: E0106 14:14:22.439017 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-webhook-certs podName:24ca9405-001a-4beb-a0fa-0f3775dab087 nodeName:}" failed. No retries permitted until 2026-01-06 14:14:23.439005509 +0000 UTC m=+881.978693273 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-webhook-certs") pod "openstack-operator-controller-manager-7d77f59d59-zfch2" (UID: "24ca9405-001a-4beb-a0fa-0f3775dab087") : secret "webhook-server-cert" not found Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.505552 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bf6d4f946-wl9w7"] Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.513692 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg"] Jan 06 14:14:22 crc kubenswrapper[4869]: W0106 14:14:22.520818 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee1dd5c3_5e85_416c_933a_07fb51ec12d8.slice/crio-3a59228ae3b53b122e1239da78d939e98d458818be62099c33b1d997ba91416f WatchSource:0}: Error finding container 3a59228ae3b53b122e1239da78d939e98d458818be62099c33b1d997ba91416f: Status 404 returned error can't find the container with id 3a59228ae3b53b122e1239da78d939e98d458818be62099c33b1d997ba91416f Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.574873 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-9b6f8f78c-p249l"] Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.588892 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9"] Jan 06 14:14:22 crc kubenswrapper[4869]: W0106 14:14:22.596115 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddef35933_1964_4328_a9b2_dc9f72d11bcf.slice/crio-a0bdba0b9bd7b1261fcda7f1d4df681c1979ecd66c809d683feed8b1bd613d31 WatchSource:0}: Error finding container a0bdba0b9bd7b1261fcda7f1d4df681c1979ecd66c809d683feed8b1bd613d31: Status 404 returned error can't find the container with id a0bdba0b9bd7b1261fcda7f1d4df681c1979ecd66c809d683feed8b1bd613d31 Jan 06 14:14:22 crc kubenswrapper[4869]: E0106 14:14:22.602302 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gl77j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-68c649d9d-r4ck9_openstack-operators(e39be0e5-0e29-45cd-925b-6eafb2b385a9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 06 14:14:22 crc kubenswrapper[4869]: E0106 14:14:22.603492 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" podUID="e39be0e5-0e29-45cd-925b-6eafb2b385a9" Jan 06 14:14:22 crc kubenswrapper[4869]: W0106 14:14:22.612910 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f4a328b_302b_496b_af2b_abec609682a6.slice/crio-fb69953048a159c46d8db200182a0b2cad770461ca09ea611f8f9dfd2e265854 WatchSource:0}: Error finding container fb69953048a159c46d8db200182a0b2cad770461ca09ea611f8f9dfd2e265854: Status 404 returned error can't find the container with id fb69953048a159c46d8db200182a0b2cad770461ca09ea611f8f9dfd2e265854 Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.614385 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8"] Jan 06 14:14:22 crc kubenswrapper[4869]: E0106 14:14:22.618051 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fk2wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-bb586bbf4-5ltk8_openstack-operators(3f4a328b-302b-496b-af2b-abec609682a6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 06 14:14:22 crc kubenswrapper[4869]: E0106 14:14:22.621951 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" podUID="3f4a328b-302b-496b-af2b-abec609682a6" Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.795728 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6"] Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.807824 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-68d988df55-d2jnv"] Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.825607 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh"] Jan 06 14:14:22 crc kubenswrapper[4869]: W0106 14:14:22.839048 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5427b0d1_29a3_47c0_9a1a_a945063ae129.slice/crio-49fe41432bb1f6a3722ee7adbfbb7db1d6dc0a1047fe5d3834bf642ac2d551d3 WatchSource:0}: Error finding container 49fe41432bb1f6a3722ee7adbfbb7db1d6dc0a1047fe5d3834bf642ac2d551d3: Status 404 returned error can't find the container with id 49fe41432bb1f6a3722ee7adbfbb7db1d6dc0a1047fe5d3834bf642ac2d551d3 Jan 06 14:14:22 crc kubenswrapper[4869]: W0106 14:14:22.840532 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ad69939_a56e_4589_bf4b_68fb8d42d7eb.slice/crio-04ea5bef6238c348f3902d9c44c6acba6e9a2f3551c108f8db4837f95d26a8c6 WatchSource:0}: Error finding container 04ea5bef6238c348f3902d9c44c6acba6e9a2f3551c108f8db4837f95d26a8c6: Status 404 returned error can't 
find the container with id 04ea5bef6238c348f3902d9c44c6acba6e9a2f3551c108f8db4837f95d26a8c6 Jan 06 14:14:22 crc kubenswrapper[4869]: W0106 14:14:22.844448 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a343a23_f2df_474c_842c_f999f7d0e9b4.slice/crio-bf4f94a61acacc8cbcddc91df2e203c114668107064d956e38381f4352ac2698 WatchSource:0}: Error finding container bf4f94a61acacc8cbcddc91df2e203c114668107064d956e38381f4352ac2698: Status 404 returned error can't find the container with id bf4f94a61acacc8cbcddc91df2e203c114668107064d956e38381f4352ac2698 Jan 06 14:14:22 crc kubenswrapper[4869]: E0106 14:14:22.846300 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4hw8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-9dbdf6486-csthh_openstack-operators(2ad69939-a56e-4589-bf4b-68fb8d42d7eb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 06 14:14:22 crc kubenswrapper[4869]: E0106 14:14:22.848015 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh" podUID="2ad69939-a56e-4589-bf4b-68fb8d42d7eb" Jan 06 14:14:22 crc kubenswrapper[4869]: 
E0106 14:14:22.860105 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tkn42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-6c866cfdcb-45sp6_openstack-operators(7a343a23-f2df-474c-842c-f999f7d0e9b4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 06 14:14:22 crc kubenswrapper[4869]: E0106 14:14:22.861414 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" podUID="7a343a23-f2df-474c-842c-f999f7d0e9b4" Jan 06 14:14:22 crc kubenswrapper[4869]: I0106 14:14:22.863431 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jtv5n"] Jan 06 14:14:22 crc kubenswrapper[4869]: W0106 14:14:22.881605 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff01227e_d9f4_4dd0_bc22_455a00294406.slice/crio-8d1bed39c306fac22bf84817d4106c622f2d6db780a28bed439c123b4069cbb0 WatchSource:0}: Error finding container 8d1bed39c306fac22bf84817d4106c622f2d6db780a28bed439c123b4069cbb0: Status 404 returned error can't find the container with id 
8d1bed39c306fac22bf84817d4106c622f2d6db780a28bed439c123b4069cbb0 Jan 06 14:14:22 crc kubenswrapper[4869]: E0106 14:14:22.903711 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m689g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-jtv5n_openstack-operators(ff01227e-d9f4-4dd0-bc22-455a00294406): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 06 14:14:22 crc kubenswrapper[4869]: E0106 14:14:22.905251 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jtv5n" podUID="ff01227e-d9f4-4dd0-bc22-455a00294406" Jan 06 14:14:23 crc kubenswrapper[4869]: I0106 14:14:23.055274 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da44c856-c228-45b1-947b-891308581bb6-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7s8247\" (UID: \"da44c856-c228-45b1-947b-891308581bb6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" Jan 06 14:14:23 crc kubenswrapper[4869]: E0106 14:14:23.055815 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 06 14:14:23 crc kubenswrapper[4869]: E0106 14:14:23.055881 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da44c856-c228-45b1-947b-891308581bb6-cert podName:da44c856-c228-45b1-947b-891308581bb6 nodeName:}" failed. 
No retries permitted until 2026-01-06 14:14:25.055860889 +0000 UTC m=+883.595548563 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/da44c856-c228-45b1-947b-891308581bb6-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7s8247" (UID: "da44c856-c228-45b1-947b-891308581bb6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 06 14:14:23 crc kubenswrapper[4869]: I0106 14:14:23.170011 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jtv5n" event={"ID":"ff01227e-d9f4-4dd0-bc22-455a00294406","Type":"ContainerStarted","Data":"8d1bed39c306fac22bf84817d4106c622f2d6db780a28bed439c123b4069cbb0"} Jan 06 14:14:23 crc kubenswrapper[4869]: E0106 14:14:23.171920 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jtv5n" podUID="ff01227e-d9f4-4dd0-bc22-455a00294406" Jan 06 14:14:23 crc kubenswrapper[4869]: I0106 14:14:23.172761 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-7lbqn" event={"ID":"995201cd-f7dd-40a5-8854-192f32239e25","Type":"ContainerStarted","Data":"de5358d9d3a62c1bf9a92d67809094b9f436263bbbe3499db80c1ef443659230"} Jan 06 14:14:23 crc kubenswrapper[4869]: I0106 14:14:23.174681 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh" event={"ID":"2ad69939-a56e-4589-bf4b-68fb8d42d7eb","Type":"ContainerStarted","Data":"04ea5bef6238c348f3902d9c44c6acba6e9a2f3551c108f8db4837f95d26a8c6"} Jan 06 14:14:23 crc kubenswrapper[4869]: E0106 14:14:23.179956 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh" podUID="2ad69939-a56e-4589-bf4b-68fb8d42d7eb" Jan 06 14:14:23 crc kubenswrapper[4869]: I0106 14:14:23.181197 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-2qx58" event={"ID":"6e523183-ec1a-481e-822e-67c457b448c0","Type":"ContainerStarted","Data":"5b4ad67d238b6ffc498c75926afca61ef52cd50f7d46615a4dc5d923a67a96f1"} Jan 06 14:14:23 crc kubenswrapper[4869]: I0106 14:14:23.183323 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" event={"ID":"7a343a23-f2df-474c-842c-f999f7d0e9b4","Type":"ContainerStarted","Data":"bf4f94a61acacc8cbcddc91df2e203c114668107064d956e38381f4352ac2698"} Jan 06 14:14:23 crc kubenswrapper[4869]: I0106 14:14:23.185535 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7596f46b97-l75w2" event={"ID":"a9cad33b-8b9c-434b-9e28-f730ca0cba42","Type":"ContainerStarted","Data":"43c778ee2ea495f3dbd914912f64e2ab728beb83eccc25c89b5b466d051b5630"} Jan 06 14:14:23 crc kubenswrapper[4869]: E0106 14:14:23.185831 4869 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" podUID="7a343a23-f2df-474c-842c-f999f7d0e9b4" Jan 06 14:14:23 crc kubenswrapper[4869]: I0106 14:14:23.188343 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" event={"ID":"e39be0e5-0e29-45cd-925b-6eafb2b385a9","Type":"ContainerStarted","Data":"d61139e81f1591ba399e982110ba4ef4d199d92bffb8462a73ffbd14c0672a8d"} Jan 06 14:14:23 crc kubenswrapper[4869]: I0106 14:14:23.193694 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-p249l" event={"ID":"def35933-1964-4328-a9b2-dc9f72d11bcf","Type":"ContainerStarted","Data":"a0bdba0b9bd7b1261fcda7f1d4df681c1979ecd66c809d683feed8b1bd613d31"} Jan 06 14:14:23 crc kubenswrapper[4869]: E0106 14:14:23.193824 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" podUID="e39be0e5-0e29-45cd-925b-6eafb2b385a9" Jan 06 14:14:23 crc kubenswrapper[4869]: I0106 14:14:23.199734 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" event={"ID":"3f4a328b-302b-496b-af2b-abec609682a6","Type":"ContainerStarted","Data":"fb69953048a159c46d8db200182a0b2cad770461ca09ea611f8f9dfd2e265854"} Jan 06 14:14:23 crc kubenswrapper[4869]: I0106 14:14:23.204328 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz" event={"ID":"ea758643-2a27-40e6-8c7f-8b0020e0ad97","Type":"ContainerStarted","Data":"084f9a658b43b240923f139c2cc05179e75ab57503f948cc6a705f1b60555034"} Jan 06 14:14:23 crc kubenswrapper[4869]: I0106 14:14:23.205683 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g6xt2" event={"ID":"d04195cb-3a00-4785-860d-8bb9537f42b7","Type":"ContainerStarted","Data":"4bf8c611a06d0ddae754fc54d5b08b7679032e93d767fc6ea7aaa4340d67b4fa"} Jan 06 14:14:23 crc kubenswrapper[4869]: E0106 14:14:23.203352 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" podUID="3f4a328b-302b-496b-af2b-abec609682a6" Jan 06 14:14:23 crc kubenswrapper[4869]: I0106 14:14:23.207156 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9" event={"ID":"4e8628c6-a97f-48ea-a91a-1ea5257c5e49","Type":"ContainerStarted","Data":"93f86a1997a4b4a0ac2c48c4fce3634a5dcb9c5be8e905ac412da6a33b7f519d"} Jan 06 14:14:23 crc kubenswrapper[4869]: I0106 14:14:23.243758 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-pm7np" event={"ID":"c634faec-64fc-4d2c-af70-94f85b6fcd59","Type":"ContainerStarted","Data":"d92f0eaaaba351453bb75bcc1f2409e1eeed81e3ccb0d861c74a4f671157318d"} Jan 06 14:14:23 crc kubenswrapper[4869]: I0106 14:14:23.277478 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d2jnv" event={"ID":"5427b0d1-29a3-47c0-9a1a-a945063ae129","Type":"ContainerStarted","Data":"49fe41432bb1f6a3722ee7adbfbb7db1d6dc0a1047fe5d3834bf642ac2d551d3"} Jan 06 14:14:23 crc kubenswrapper[4869]: I0106 14:14:23.294856 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg" event={"ID":"c0aac0d5-701b-4a75-9bd0-4c9530692565","Type":"ContainerStarted","Data":"e49fed3c821c732bee17c588bf6346a162a7873a9cb611c94f43bfb4c28bb6d1"} Jan 06 14:14:23 crc kubenswrapper[4869]: I0106 14:14:23.301092 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wl9w7" event={"ID":"ee1dd5c3-5e85-416c-933a-07fb51ec12d8","Type":"ContainerStarted","Data":"3a59228ae3b53b122e1239da78d939e98d458818be62099c33b1d997ba91416f"} Jan 06 14:14:23 crc kubenswrapper[4869]: I0106 14:14:23.471045 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-metrics-certs\") pod \"openstack-operator-controller-manager-7d77f59d59-zfch2\" (UID: \"24ca9405-001a-4beb-a0fa-0f3775dab087\") " pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:23 crc kubenswrapper[4869]: I0106 14:14:23.471099 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-webhook-certs\") pod \"openstack-operator-controller-manager-7d77f59d59-zfch2\" (UID: \"24ca9405-001a-4beb-a0fa-0f3775dab087\") " pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:23 crc kubenswrapper[4869]: E0106 14:14:23.471233 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 06 14:14:23 crc kubenswrapper[4869]: E0106 14:14:23.471288 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-webhook-certs podName:24ca9405-001a-4beb-a0fa-0f3775dab087 nodeName:}" failed. No retries permitted until 2026-01-06 14:14:25.471270956 +0000 UTC m=+884.010958620 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-webhook-certs") pod "openstack-operator-controller-manager-7d77f59d59-zfch2" (UID: "24ca9405-001a-4beb-a0fa-0f3775dab087") : secret "webhook-server-cert" not found Jan 06 14:14:23 crc kubenswrapper[4869]: E0106 14:14:23.471724 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 06 14:14:23 crc kubenswrapper[4869]: E0106 14:14:23.471750 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-metrics-certs podName:24ca9405-001a-4beb-a0fa-0f3775dab087 nodeName:}" failed. 
No retries permitted until 2026-01-06 14:14:25.471742297 +0000 UTC m=+884.011429951 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-metrics-certs") pod "openstack-operator-controller-manager-7d77f59d59-zfch2" (UID: "24ca9405-001a-4beb-a0fa-0f3775dab087") : secret "metrics-server-cert" not found Jan 06 14:14:24 crc kubenswrapper[4869]: E0106 14:14:24.312699 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" podUID="3f4a328b-302b-496b-af2b-abec609682a6" Jan 06 14:14:24 crc kubenswrapper[4869]: E0106 14:14:24.312822 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:4e3d234c1398039c2593611f7b0fd2a6b284cafb1563e6737876a265b9af42b6\\\"\"" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" podUID="7a343a23-f2df-474c-842c-f999f7d0e9b4" Jan 06 14:14:24 crc kubenswrapper[4869]: E0106 14:14:24.313010 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" podUID="e39be0e5-0e29-45cd-925b-6eafb2b385a9" Jan 06 14:14:24 crc kubenswrapper[4869]: E0106 14:14:24.313069 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jtv5n" podUID="ff01227e-d9f4-4dd0-bc22-455a00294406" Jan 06 14:14:24 crc kubenswrapper[4869]: E0106 14:14:24.313109 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f0ece9a81e4be3dbc1ff752a951970380546d8c0dea910953f862c219444b97a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh" podUID="2ad69939-a56e-4589-bf4b-68fb8d42d7eb" Jan 06 14:14:24 crc kubenswrapper[4869]: I0106 14:14:24.495851 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b295076d-930c-4a2b-9ba5-3cee1623e268-cert\") pod \"infra-operator-controller-manager-6d99759cf-t68w7\" (UID: \"b295076d-930c-4a2b-9ba5-3cee1623e268\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" Jan 06 14:14:24 crc kubenswrapper[4869]: E0106 14:14:24.496965 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 06 14:14:24 crc kubenswrapper[4869]: E0106 14:14:24.497028 4869 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/b295076d-930c-4a2b-9ba5-3cee1623e268-cert podName:b295076d-930c-4a2b-9ba5-3cee1623e268 nodeName:}" failed. No retries permitted until 2026-01-06 14:14:28.497005986 +0000 UTC m=+887.036693720 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b295076d-930c-4a2b-9ba5-3cee1623e268-cert") pod "infra-operator-controller-manager-6d99759cf-t68w7" (UID: "b295076d-930c-4a2b-9ba5-3cee1623e268") : secret "infra-operator-webhook-server-cert" not found Jan 06 14:14:25 crc kubenswrapper[4869]: I0106 14:14:25.108018 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da44c856-c228-45b1-947b-891308581bb6-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7s8247\" (UID: \"da44c856-c228-45b1-947b-891308581bb6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" Jan 06 14:14:25 crc kubenswrapper[4869]: E0106 14:14:25.108418 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 06 14:14:25 crc kubenswrapper[4869]: E0106 14:14:25.108587 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da44c856-c228-45b1-947b-891308581bb6-cert podName:da44c856-c228-45b1-947b-891308581bb6 nodeName:}" failed. No retries permitted until 2026-01-06 14:14:29.108560569 +0000 UTC m=+887.648248233 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/da44c856-c228-45b1-947b-891308581bb6-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7s8247" (UID: "da44c856-c228-45b1-947b-891308581bb6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 06 14:14:25 crc kubenswrapper[4869]: I0106 14:14:25.513676 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-metrics-certs\") pod \"openstack-operator-controller-manager-7d77f59d59-zfch2\" (UID: \"24ca9405-001a-4beb-a0fa-0f3775dab087\") " pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:25 crc kubenswrapper[4869]: I0106 14:14:25.513720 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-webhook-certs\") pod \"openstack-operator-controller-manager-7d77f59d59-zfch2\" (UID: \"24ca9405-001a-4beb-a0fa-0f3775dab087\") " pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:25 crc kubenswrapper[4869]: E0106 14:14:25.513890 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 06 14:14:25 crc kubenswrapper[4869]: E0106 14:14:25.513986 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-metrics-certs podName:24ca9405-001a-4beb-a0fa-0f3775dab087 nodeName:}" failed. No retries permitted until 2026-01-06 14:14:29.513969706 +0000 UTC m=+888.053657370 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-metrics-certs") pod "openstack-operator-controller-manager-7d77f59d59-zfch2" (UID: "24ca9405-001a-4beb-a0fa-0f3775dab087") : secret "metrics-server-cert" not found Jan 06 14:14:25 crc kubenswrapper[4869]: E0106 14:14:25.513981 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 06 14:14:25 crc kubenswrapper[4869]: E0106 14:14:25.514078 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-webhook-certs podName:24ca9405-001a-4beb-a0fa-0f3775dab087 nodeName:}" failed. No retries permitted until 2026-01-06 14:14:29.514051918 +0000 UTC m=+888.053739662 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-webhook-certs") pod "openstack-operator-controller-manager-7d77f59d59-zfch2" (UID: "24ca9405-001a-4beb-a0fa-0f3775dab087") : secret "webhook-server-cert" not found Jan 06 14:14:28 crc kubenswrapper[4869]: I0106 14:14:28.575316 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b295076d-930c-4a2b-9ba5-3cee1623e268-cert\") pod \"infra-operator-controller-manager-6d99759cf-t68w7\" (UID: \"b295076d-930c-4a2b-9ba5-3cee1623e268\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" Jan 06 14:14:28 crc kubenswrapper[4869]: E0106 14:14:28.575549 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 06 14:14:28 crc kubenswrapper[4869]: E0106 14:14:28.575845 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b295076d-930c-4a2b-9ba5-3cee1623e268-cert podName:b295076d-930c-4a2b-9ba5-3cee1623e268 nodeName:}" failed. No retries permitted until 2026-01-06 14:14:36.575823757 +0000 UTC m=+895.115511421 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b295076d-930c-4a2b-9ba5-3cee1623e268-cert") pod "infra-operator-controller-manager-6d99759cf-t68w7" (UID: "b295076d-930c-4a2b-9ba5-3cee1623e268") : secret "infra-operator-webhook-server-cert" not found Jan 06 14:14:29 crc kubenswrapper[4869]: I0106 14:14:29.183286 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da44c856-c228-45b1-947b-891308581bb6-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7s8247\" (UID: \"da44c856-c228-45b1-947b-891308581bb6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" Jan 06 14:14:29 crc kubenswrapper[4869]: E0106 14:14:29.183511 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 06 14:14:29 crc kubenswrapper[4869]: E0106 14:14:29.183620 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da44c856-c228-45b1-947b-891308581bb6-cert podName:da44c856-c228-45b1-947b-891308581bb6 nodeName:}" failed. No retries permitted until 2026-01-06 14:14:37.183591858 +0000 UTC m=+895.723279542 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/da44c856-c228-45b1-947b-891308581bb6-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7s8247" (UID: "da44c856-c228-45b1-947b-891308581bb6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 06 14:14:29 crc kubenswrapper[4869]: I0106 14:14:29.589912 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-metrics-certs\") pod \"openstack-operator-controller-manager-7d77f59d59-zfch2\" (UID: \"24ca9405-001a-4beb-a0fa-0f3775dab087\") " pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:29 crc kubenswrapper[4869]: I0106 14:14:29.589969 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-webhook-certs\") pod \"openstack-operator-controller-manager-7d77f59d59-zfch2\" (UID: \"24ca9405-001a-4beb-a0fa-0f3775dab087\") " pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:29 crc kubenswrapper[4869]: E0106 14:14:29.590067 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 06 14:14:29 crc kubenswrapper[4869]: E0106 14:14:29.590102 4869 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 06 14:14:29 crc kubenswrapper[4869]: E0106 14:14:29.590126 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-metrics-certs podName:24ca9405-001a-4beb-a0fa-0f3775dab087 nodeName:}" failed. No retries permitted until 2026-01-06 14:14:37.590110281 +0000 UTC m=+896.129797935 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-metrics-certs") pod "openstack-operator-controller-manager-7d77f59d59-zfch2" (UID: "24ca9405-001a-4beb-a0fa-0f3775dab087") : secret "metrics-server-cert" not found Jan 06 14:14:29 crc kubenswrapper[4869]: E0106 14:14:29.590139 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-webhook-certs podName:24ca9405-001a-4beb-a0fa-0f3775dab087 nodeName:}" failed. No retries permitted until 2026-01-06 14:14:37.590133512 +0000 UTC m=+896.129821176 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-webhook-certs") pod "openstack-operator-controller-manager-7d77f59d59-zfch2" (UID: "24ca9405-001a-4beb-a0fa-0f3775dab087") : secret "webhook-server-cert" not found Jan 06 14:14:33 crc kubenswrapper[4869]: I0106 14:14:33.622791 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:14:33 crc kubenswrapper[4869]: I0106 14:14:33.623366 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:14:34 crc kubenswrapper[4869]: E0106 14:14:34.315265 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a" Jan 06 14:14:34 crc kubenswrapper[4869]: E0106 14:14:34.315470 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-drfwm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-66f8b87655-g7gcq_openstack-operators(4a2ad023-66f0-45bc-9bea-b64cca26c388): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 06 14:14:34 crc kubenswrapper[4869]: E0106 14:14:34.316971 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq" podUID="4a2ad023-66f0-45bc-9bea-b64cca26c388" Jan 06 14:14:34 crc kubenswrapper[4869]: E0106 14:14:34.394640 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a\\\"\"" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq" podUID="4a2ad023-66f0-45bc-9bea-b64cca26c388" Jan 06 14:14:35 crc kubenswrapper[4869]: E0106 14:14:35.059436 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c" Jan 06 14:14:35 crc kubenswrapper[4869]: E0106 14:14:35.059619 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wzq66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-598945d5b8-t4dkz_openstack-operators(ea758643-2a27-40e6-8c7f-8b0020e0ad97): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 06 14:14:35 crc kubenswrapper[4869]: E0106 14:14:35.060792 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz" podUID="ea758643-2a27-40e6-8c7f-8b0020e0ad97" Jan 06 14:14:35 crc kubenswrapper[4869]: E0106 14:14:35.398322 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:c846ab4a49272557884db6b976f979e6b9dce1aa73e5eb7872b4472f44602a1c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz" podUID="ea758643-2a27-40e6-8c7f-8b0020e0ad97" Jan 06 14:14:36 crc kubenswrapper[4869]: I0106 14:14:36.605177 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b295076d-930c-4a2b-9ba5-3cee1623e268-cert\") pod \"infra-operator-controller-manager-6d99759cf-t68w7\" (UID: \"b295076d-930c-4a2b-9ba5-3cee1623e268\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" Jan 06 14:14:36 crc kubenswrapper[4869]: E0106 14:14:36.605436 4869 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 06 14:14:36 crc kubenswrapper[4869]: E0106 14:14:36.605560 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b295076d-930c-4a2b-9ba5-3cee1623e268-cert podName:b295076d-930c-4a2b-9ba5-3cee1623e268 nodeName:}" failed. No retries permitted until 2026-01-06 14:14:52.605542806 +0000 UTC m=+911.145230470 (durationBeforeRetry 16s). 
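The PullImage failures above surface as gRPC status errors from the CRI runtime: code = Canceled with "copying config: context canceled" means the pull was aborted mid-transfer (for example when a deadline fires while many operator images are pulled in parallel), which is transient; the same pods reach ContainerStarted further down. Below is a sketch of separating that transient case from a hard registry failure by inspecting the status code. It is illustrative only, since the journal records just the rendered error string.

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// classifyPullError reports whether a CRI PullImage error was a context
// cancellation (transient; a retry usually succeeds) rather than a hard
// registry failure. Mirrors the "code = Canceled" strings in the log above.
func classifyPullError(err error) string {
	st, ok := status.FromError(err)
	if !ok {
		return fmt.Sprintf("non-gRPC error: %v", err)
	}
	switch st.Code() {
	case codes.Canceled, codes.DeadlineExceeded:
		return "transient: pull canceled/timed out, expect a retry"
	default:
		return fmt.Sprintf("hard failure: %s: %s", st.Code(), st.Message())
	}
}

func main() {
	err := status.Error(codes.Canceled, "copying config: context canceled")
	fmt.Println(classifyPullError(err))
	fmt.Println(classifyPullError(errors.New("manifest unknown")))
}
```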
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b295076d-930c-4a2b-9ba5-3cee1623e268-cert") pod "infra-operator-controller-manager-6d99759cf-t68w7" (UID: "b295076d-930c-4a2b-9ba5-3cee1623e268") : secret "infra-operator-webhook-server-cert" not found Jan 06 14:14:37 crc kubenswrapper[4869]: I0106 14:14:37.214096 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da44c856-c228-45b1-947b-891308581bb6-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7s8247\" (UID: \"da44c856-c228-45b1-947b-891308581bb6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" Jan 06 14:14:37 crc kubenswrapper[4869]: E0106 14:14:37.214270 4869 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 06 14:14:37 crc kubenswrapper[4869]: E0106 14:14:37.214342 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da44c856-c228-45b1-947b-891308581bb6-cert podName:da44c856-c228-45b1-947b-891308581bb6 nodeName:}" failed. No retries permitted until 2026-01-06 14:14:53.214323391 +0000 UTC m=+911.754011055 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/da44c856-c228-45b1-947b-891308581bb6-cert") pod "openstack-baremetal-operator-controller-manager-78948ddfd7s8247" (UID: "da44c856-c228-45b1-947b-891308581bb6") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 06 14:14:37 crc kubenswrapper[4869]: I0106 14:14:37.620536 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-metrics-certs\") pod \"openstack-operator-controller-manager-7d77f59d59-zfch2\" (UID: \"24ca9405-001a-4beb-a0fa-0f3775dab087\") " pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:37 crc kubenswrapper[4869]: I0106 14:14:37.620601 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-webhook-certs\") pod \"openstack-operator-controller-manager-7d77f59d59-zfch2\" (UID: \"24ca9405-001a-4beb-a0fa-0f3775dab087\") " pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:37 crc kubenswrapper[4869]: E0106 14:14:37.621569 4869 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 06 14:14:37 crc kubenswrapper[4869]: E0106 14:14:37.621770 4869 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-metrics-certs podName:24ca9405-001a-4beb-a0fa-0f3775dab087 nodeName:}" failed. No retries permitted until 2026-01-06 14:14:53.621748246 +0000 UTC m=+912.161435910 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-metrics-certs") pod "openstack-operator-controller-manager-7d77f59d59-zfch2" (UID: "24ca9405-001a-4beb-a0fa-0f3775dab087") : secret "metrics-server-cert" not found Jan 06 14:14:37 crc kubenswrapper[4869]: I0106 14:14:37.631815 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-webhook-certs\") pod \"openstack-operator-controller-manager-7d77f59d59-zfch2\" (UID: \"24ca9405-001a-4beb-a0fa-0f3775dab087\") " pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:38 crc kubenswrapper[4869]: E0106 14:14:38.224126 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Jan 06 14:14:38 crc kubenswrapper[4869]: E0106 14:14:38.224585 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g7v7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5fbbf8b6cc-n78kg_openstack-operators(c0aac0d5-701b-4a75-9bd0-4c9530692565): ErrImagePull: rpc 
error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 06 14:14:38 crc kubenswrapper[4869]: E0106 14:14:38.226060 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg" podUID="c0aac0d5-701b-4a75-9bd0-4c9530692565" Jan 06 14:14:38 crc kubenswrapper[4869]: E0106 14:14:38.415999 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg" podUID="c0aac0d5-701b-4a75-9bd0-4c9530692565" Jan 06 14:14:39 crc kubenswrapper[4869]: E0106 14:14:39.115761 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.69:5001/openstack-k8s-operators/keystone-operator:3e1a3e022c81bb2a980288631d1e0c695f49855c" Jan 06 14:14:39 crc kubenswrapper[4869]: E0106 14:14:39.115808 4869 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.69:5001/openstack-k8s-operators/keystone-operator:3e1a3e022c81bb2a980288631d1e0c695f49855c" Jan 06 14:14:39 crc kubenswrapper[4869]: E0106 14:14:39.115965 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.69:5001/openstack-k8s-operators/keystone-operator:3e1a3e022c81bb2a980288631d1e0c695f49855c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6bsns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7c8fb65dbf-55rl9_openstack-operators(4e8628c6-a97f-48ea-a91a-1ea5257c5e49): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 06 14:14:39 crc kubenswrapper[4869]: E0106 14:14:39.117824 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9" podUID="4e8628c6-a97f-48ea-a91a-1ea5257c5e49" Jan 06 14:14:39 crc kubenswrapper[4869]: E0106 14:14:39.432033 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.69:5001/openstack-k8s-operators/keystone-operator:3e1a3e022c81bb2a980288631d1e0c695f49855c\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9" podUID="4e8628c6-a97f-48ea-a91a-1ea5257c5e49" Jan 06 14:14:40 crc kubenswrapper[4869]: I0106 14:14:40.431258 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-pm7np" event={"ID":"c634faec-64fc-4d2c-af70-94f85b6fcd59","Type":"ContainerStarted","Data":"06aa3378a0519e48931ccfb5f5c95f789cf6fc251e98fa7e4d2eb15a996b94af"} Jan 06 14:14:40 crc kubenswrapper[4869]: I0106 14:14:40.431841 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-pm7np" Jan 06 14:14:40 crc kubenswrapper[4869]: I0106 14:14:40.454953 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-pm7np" podStartSLOduration=3.730503882 podStartE2EDuration="20.454930089s" podCreationTimestamp="2026-01-06 14:14:20 +0000 UTC" firstStartedPulling="2026-01-06 14:14:22.363342376 +0000 UTC m=+880.903030040" lastFinishedPulling="2026-01-06 14:14:39.087768583 +0000 UTC m=+897.627456247" observedRunningTime="2026-01-06 14:14:40.448429502 +0000 UTC m=+898.988117176" watchObservedRunningTime="2026-01-06 14:14:40.454930089 +0000 UTC m=+898.994617743" Jan 06 14:14:50 crc kubenswrapper[4869]: E0106 14:14:50.501250 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7" Jan 06 14:14:50 crc kubenswrapper[4869]: E0106 14:14:50.501892 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fk2wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-bb586bbf4-5ltk8_openstack-operators(3f4a328b-302b-496b-af2b-abec609682a6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 06 14:14:50 crc kubenswrapper[4869]: E0106 14:14:50.503014 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" podUID="3f4a328b-302b-496b-af2b-abec609682a6" Jan 06 14:14:51 crc kubenswrapper[4869]: E0106 14:14:51.061733 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 06 14:14:51 crc kubenswrapper[4869]: E0106 14:14:51.061946 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m689g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-jtv5n_openstack-operators(ff01227e-d9f4-4dd0-bc22-455a00294406): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 06 14:14:51 crc kubenswrapper[4869]: E0106 14:14:51.063160 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jtv5n" podUID="ff01227e-d9f4-4dd0-bc22-455a00294406" Jan 06 14:14:51 crc kubenswrapper[4869]: I0106 14:14:51.452295 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-pm7np" Jan 06 14:14:51 crc kubenswrapper[4869]: I0106 14:14:51.533930 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hcm2g" event={"ID":"81a6ac18-5e57-4f17-a5b3-64b76e59f83b","Type":"ContainerStarted","Data":"175c5b23ba4dbfcdff4243e7ee9bb10f0a5125b40dd0e22981c86bb049bf4066"} Jan 06 14:14:51 crc kubenswrapper[4869]: I0106 14:14:51.534220 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hcm2g" Jan 06 14:14:51 crc kubenswrapper[4869]: I0106 14:14:51.543311 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wl9w7" 
event={"ID":"ee1dd5c3-5e85-416c-933a-07fb51ec12d8","Type":"ContainerStarted","Data":"daf5f32aaff8d77ac8accf864a4306f102ae320925b9bf416979b73d5a3c019d"} Jan 06 14:14:51 crc kubenswrapper[4869]: I0106 14:14:51.544264 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wl9w7" Jan 06 14:14:51 crc kubenswrapper[4869]: I0106 14:14:51.548581 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-5tjdn" event={"ID":"9fceb23f-1f65-40c7-b8e9-3de1097ecee2","Type":"ContainerStarted","Data":"17b862099d24e1d8a052ddf4dfb6d65190c6361dc11de9fc6191530561a85ff0"} Jan 06 14:14:51 crc kubenswrapper[4869]: I0106 14:14:51.548844 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-5tjdn" Jan 06 14:14:51 crc kubenswrapper[4869]: I0106 14:14:51.553851 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d2jnv" event={"ID":"5427b0d1-29a3-47c0-9a1a-a945063ae129","Type":"ContainerStarted","Data":"86defbb06bbee1d7228bde07e3a9f11d482f3201f21cfe1f7ec248776706092d"} Jan 06 14:14:51 crc kubenswrapper[4869]: I0106 14:14:51.554098 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d2jnv" Jan 06 14:14:51 crc kubenswrapper[4869]: I0106 14:14:51.562046 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hcm2g" podStartSLOduration=14.460343587 podStartE2EDuration="31.562022363s" podCreationTimestamp="2026-01-06 14:14:20 +0000 UTC" firstStartedPulling="2026-01-06 14:14:21.992235236 +0000 UTC m=+880.531922900" lastFinishedPulling="2026-01-06 14:14:39.093914022 +0000 UTC m=+897.633601676" observedRunningTime="2026-01-06 14:14:51.55360729 +0000 UTC m=+910.093294974" watchObservedRunningTime="2026-01-06 14:14:51.562022363 +0000 UTC m=+910.101710037" Jan 06 14:14:51 crc kubenswrapper[4869]: I0106 14:14:51.562975 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g6xt2" event={"ID":"d04195cb-3a00-4785-860d-8bb9537f42b7","Type":"ContainerStarted","Data":"1407457f7e705c4e431efd5a48d257de0ce1a75c53ea6946c24675cf4a98f998"} Jan 06 14:14:51 crc kubenswrapper[4869]: I0106 14:14:51.563812 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g6xt2" Jan 06 14:14:51 crc kubenswrapper[4869]: I0106 14:14:51.583903 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-npl5f" event={"ID":"9b55eca9-5342-4826-b2fd-3fe94520e1f2","Type":"ContainerStarted","Data":"4c29b67376d2afa3d18c49c426a86a353d188749f80f6233bd4e8aef8ffe932c"} Jan 06 14:14:51 crc kubenswrapper[4869]: I0106 14:14:51.584646 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-npl5f" Jan 06 14:14:51 crc kubenswrapper[4869]: I0106 14:14:51.586974 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wl9w7" podStartSLOduration=14.019174269 podStartE2EDuration="30.586945183s" 
podCreationTimestamp="2026-01-06 14:14:21 +0000 UTC" firstStartedPulling="2026-01-06 14:14:22.521870555 +0000 UTC m=+881.061558219" lastFinishedPulling="2026-01-06 14:14:39.089641469 +0000 UTC m=+897.629329133" observedRunningTime="2026-01-06 14:14:51.57932578 +0000 UTC m=+910.119013454" watchObservedRunningTime="2026-01-06 14:14:51.586945183 +0000 UTC m=+910.126632847" Jan 06 14:14:51 crc kubenswrapper[4869]: I0106 14:14:51.628776 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-5tjdn" podStartSLOduration=14.159795495000001 podStartE2EDuration="31.62875219s" podCreationTimestamp="2026-01-06 14:14:20 +0000 UTC" firstStartedPulling="2026-01-06 14:14:21.624826574 +0000 UTC m=+880.164514238" lastFinishedPulling="2026-01-06 14:14:39.093783269 +0000 UTC m=+897.633470933" observedRunningTime="2026-01-06 14:14:51.614142318 +0000 UTC m=+910.153829972" watchObservedRunningTime="2026-01-06 14:14:51.62875219 +0000 UTC m=+910.168439854" Jan 06 14:14:51 crc kubenswrapper[4869]: I0106 14:14:51.641853 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g6xt2" podStartSLOduration=14.91204715 podStartE2EDuration="31.641832976s" podCreationTimestamp="2026-01-06 14:14:20 +0000 UTC" firstStartedPulling="2026-01-06 14:14:22.363254894 +0000 UTC m=+880.902942558" lastFinishedPulling="2026-01-06 14:14:39.09304072 +0000 UTC m=+897.632728384" observedRunningTime="2026-01-06 14:14:51.641787975 +0000 UTC m=+910.181475639" watchObservedRunningTime="2026-01-06 14:14:51.641832976 +0000 UTC m=+910.181520640" Jan 06 14:14:51 crc kubenswrapper[4869]: I0106 14:14:51.727999 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d2jnv" podStartSLOduration=14.483850622 podStartE2EDuration="30.72797651s" podCreationTimestamp="2026-01-06 14:14:21 +0000 UTC" firstStartedPulling="2026-01-06 14:14:22.844747992 +0000 UTC m=+881.384435656" lastFinishedPulling="2026-01-06 14:14:39.08887388 +0000 UTC m=+897.628561544" observedRunningTime="2026-01-06 14:14:51.706065513 +0000 UTC m=+910.245753197" watchObservedRunningTime="2026-01-06 14:14:51.72797651 +0000 UTC m=+910.267664174" Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.596118 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-2qx58" event={"ID":"6e523183-ec1a-481e-822e-67c457b448c0","Type":"ContainerStarted","Data":"0ec9ebfd27586d3ab0878f2a7aba831cd2653a1a18fa56f75efd1f6aaa871881"} Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.596273 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-2qx58" Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.603000 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7596f46b97-l75w2" event={"ID":"a9cad33b-8b9c-434b-9e28-f730ca0cba42","Type":"ContainerStarted","Data":"5b62c5f7f7c95e431d459e853b2901c2c02918da47fcb5d01a0313d3fc9a1065"} Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.603093 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7596f46b97-l75w2" Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.604927 4869 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" event={"ID":"7a343a23-f2df-474c-842c-f999f7d0e9b4","Type":"ContainerStarted","Data":"96741600da11a25be327053e9cc07a4e5c4c6b7ae3ec4201dc556cc3152339ab"} Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.605155 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.608145 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" event={"ID":"e39be0e5-0e29-45cd-925b-6eafb2b385a9","Type":"ContainerStarted","Data":"3b40c96ceb25260245edaa8f4b527cb38c9d12d6147a5203757d76f8b77b13d5"} Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.608582 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.609732 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b295076d-930c-4a2b-9ba5-3cee1623e268-cert\") pod \"infra-operator-controller-manager-6d99759cf-t68w7\" (UID: \"b295076d-930c-4a2b-9ba5-3cee1623e268\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.632849 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b295076d-930c-4a2b-9ba5-3cee1623e268-cert\") pod \"infra-operator-controller-manager-6d99759cf-t68w7\" (UID: \"b295076d-930c-4a2b-9ba5-3cee1623e268\") " pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.633571 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-p249l" event={"ID":"def35933-1964-4328-a9b2-dc9f72d11bcf","Type":"ContainerStarted","Data":"f1a51149b3e31b8a4eef70504cfb37d3e6400af3cbe102c543e75ccdbced57f7"} Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.633694 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-p249l" Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.633980 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-2qx58" podStartSLOduration=15.701100806 podStartE2EDuration="32.633964276s" podCreationTimestamp="2026-01-06 14:14:20 +0000 UTC" firstStartedPulling="2026-01-06 14:14:22.159552296 +0000 UTC m=+880.699239960" lastFinishedPulling="2026-01-06 14:14:39.092415766 +0000 UTC m=+897.632103430" observedRunningTime="2026-01-06 14:14:52.632354437 +0000 UTC m=+911.172042101" watchObservedRunningTime="2026-01-06 14:14:52.633964276 +0000 UTC m=+911.173651940" Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.635505 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-npl5f" podStartSLOduration=15.546911042 podStartE2EDuration="32.635501403s" podCreationTimestamp="2026-01-06 14:14:20 +0000 UTC" firstStartedPulling="2026-01-06 14:14:22.004263065 +0000 UTC m=+880.543950729" lastFinishedPulling="2026-01-06 14:14:39.092853426 +0000 UTC 
m=+897.632541090" observedRunningTime="2026-01-06 14:14:51.74827889 +0000 UTC m=+910.287966554" watchObservedRunningTime="2026-01-06 14:14:52.635501403 +0000 UTC m=+911.175189067" Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.638253 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-7lbqn" event={"ID":"995201cd-f7dd-40a5-8854-192f32239e25","Type":"ContainerStarted","Data":"02f3dcea7c0a0fe08b22873c8349da278225601cd270519697bf69bb4ff1fb69"} Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.638916 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-7lbqn" Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.640093 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq" event={"ID":"4a2ad023-66f0-45bc-9bea-b64cca26c388","Type":"ContainerStarted","Data":"949155fa791eb5534c850d723dd4783c896fa6a3efcf9b39412570c56cd12ba1"} Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.640463 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq" Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.646717 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz" event={"ID":"ea758643-2a27-40e6-8c7f-8b0020e0ad97","Type":"ContainerStarted","Data":"b24be7dfa2611785b1f30549d689bd09271f8196d9e4035815689895d7b4bf4f"} Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.647483 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz" Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.685295 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7596f46b97-l75w2" podStartSLOduration=15.752650748 podStartE2EDuration="32.685273952s" podCreationTimestamp="2026-01-06 14:14:20 +0000 UTC" firstStartedPulling="2026-01-06 14:14:22.158611463 +0000 UTC m=+880.698299127" lastFinishedPulling="2026-01-06 14:14:39.091234667 +0000 UTC m=+897.630922331" observedRunningTime="2026-01-06 14:14:52.681075061 +0000 UTC m=+911.220762725" watchObservedRunningTime="2026-01-06 14:14:52.685273952 +0000 UTC m=+911.224961616" Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.728697 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" podStartSLOduration=4.270715444 podStartE2EDuration="32.728679338s" podCreationTimestamp="2026-01-06 14:14:20 +0000 UTC" firstStartedPulling="2026-01-06 14:14:22.601161234 +0000 UTC m=+881.140848898" lastFinishedPulling="2026-01-06 14:14:51.059125138 +0000 UTC m=+909.598812792" observedRunningTime="2026-01-06 14:14:52.709336752 +0000 UTC m=+911.249024416" watchObservedRunningTime="2026-01-06 14:14:52.728679338 +0000 UTC m=+911.268367002" Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.740182 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" podStartSLOduration=3.531883246 podStartE2EDuration="31.740160394s" podCreationTimestamp="2026-01-06 14:14:21 +0000 UTC" firstStartedPulling="2026-01-06 
14:14:22.859957239 +0000 UTC m=+881.399644903" lastFinishedPulling="2026-01-06 14:14:51.068234387 +0000 UTC m=+909.607922051" observedRunningTime="2026-01-06 14:14:52.738547116 +0000 UTC m=+911.278234790" watchObservedRunningTime="2026-01-06 14:14:52.740160394 +0000 UTC m=+911.279848058" Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.796988 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq" podStartSLOduration=3.458640582 podStartE2EDuration="32.796970003s" podCreationTimestamp="2026-01-06 14:14:20 +0000 UTC" firstStartedPulling="2026-01-06 14:14:21.910417685 +0000 UTC m=+880.450105349" lastFinishedPulling="2026-01-06 14:14:51.248747106 +0000 UTC m=+909.788434770" observedRunningTime="2026-01-06 14:14:52.795151529 +0000 UTC m=+911.334839193" watchObservedRunningTime="2026-01-06 14:14:52.796970003 +0000 UTC m=+911.336657667" Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.846507 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-7lbqn" podStartSLOduration=16.113735467 podStartE2EDuration="32.846486036s" podCreationTimestamp="2026-01-06 14:14:20 +0000 UTC" firstStartedPulling="2026-01-06 14:14:22.361196474 +0000 UTC m=+880.900884138" lastFinishedPulling="2026-01-06 14:14:39.093947043 +0000 UTC m=+897.633634707" observedRunningTime="2026-01-06 14:14:52.844262582 +0000 UTC m=+911.383950246" watchObservedRunningTime="2026-01-06 14:14:52.846486036 +0000 UTC m=+911.386173700" Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.886417 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-95g8x" Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.894922 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.917938 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz" podStartSLOduration=4.026172754 podStartE2EDuration="32.917915027s" podCreationTimestamp="2026-01-06 14:14:20 +0000 UTC" firstStartedPulling="2026-01-06 14:14:22.361198624 +0000 UTC m=+880.900886288" lastFinishedPulling="2026-01-06 14:14:51.252940897 +0000 UTC m=+909.792628561" observedRunningTime="2026-01-06 14:14:52.914024453 +0000 UTC m=+911.453712127" watchObservedRunningTime="2026-01-06 14:14:52.917915027 +0000 UTC m=+911.457602691" Jan 06 14:14:52 crc kubenswrapper[4869]: I0106 14:14:52.949092 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-p249l" podStartSLOduration=15.454523127 podStartE2EDuration="31.949067387s" podCreationTimestamp="2026-01-06 14:14:21 +0000 UTC" firstStartedPulling="2026-01-06 14:14:22.597133708 +0000 UTC m=+881.136821362" lastFinishedPulling="2026-01-06 14:14:39.091677948 +0000 UTC m=+897.631365622" observedRunningTime="2026-01-06 14:14:52.946053284 +0000 UTC m=+911.485740948" watchObservedRunningTime="2026-01-06 14:14:52.949067387 +0000 UTC m=+911.488755051" Jan 06 14:14:53 crc kubenswrapper[4869]: I0106 14:14:53.219052 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da44c856-c228-45b1-947b-891308581bb6-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7s8247\" (UID: \"da44c856-c228-45b1-947b-891308581bb6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" Jan 06 14:14:53 crc kubenswrapper[4869]: I0106 14:14:53.223230 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da44c856-c228-45b1-947b-891308581bb6-cert\") pod \"openstack-baremetal-operator-controller-manager-78948ddfd7s8247\" (UID: \"da44c856-c228-45b1-947b-891308581bb6\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" Jan 06 14:14:53 crc kubenswrapper[4869]: I0106 14:14:53.399684 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-mkkdk" Jan 06 14:14:53 crc kubenswrapper[4869]: I0106 14:14:53.407587 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" Jan 06 14:14:53 crc kubenswrapper[4869]: I0106 14:14:53.624463 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-metrics-certs\") pod \"openstack-operator-controller-manager-7d77f59d59-zfch2\" (UID: \"24ca9405-001a-4beb-a0fa-0f3775dab087\") " pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:53 crc kubenswrapper[4869]: I0106 14:14:53.629717 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24ca9405-001a-4beb-a0fa-0f3775dab087-metrics-certs\") pod \"openstack-operator-controller-manager-7d77f59d59-zfch2\" (UID: \"24ca9405-001a-4beb-a0fa-0f3775dab087\") " pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:53 crc kubenswrapper[4869]: I0106 14:14:53.713709 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-p28br" Jan 06 14:14:53 crc kubenswrapper[4869]: I0106 14:14:53.719738 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:54 crc kubenswrapper[4869]: I0106 14:14:54.222251 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2"] Jan 06 14:14:54 crc kubenswrapper[4869]: I0106 14:14:54.364098 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247"] Jan 06 14:14:54 crc kubenswrapper[4869]: W0106 14:14:54.394955 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda44c856_c228_45b1_947b_891308581bb6.slice/crio-20326144d7e3e67d5e8a4231b78d1c54bcf1c7b6d5dff1e7087ce3da2120fbe6 WatchSource:0}: Error finding container 20326144d7e3e67d5e8a4231b78d1c54bcf1c7b6d5dff1e7087ce3da2120fbe6: Status 404 returned error can't find the container with id 20326144d7e3e67d5e8a4231b78d1c54bcf1c7b6d5dff1e7087ce3da2120fbe6 Jan 06 14:14:54 crc kubenswrapper[4869]: I0106 14:14:54.510231 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7"] Jan 06 14:14:54 crc kubenswrapper[4869]: W0106 14:14:54.514875 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb295076d_930c_4a2b_9ba5_3cee1623e268.slice/crio-f458bd69e197ff39915b5e6ba4311c4304ac0dbf94bf14599bb95cf87d0dcf6c WatchSource:0}: Error finding container f458bd69e197ff39915b5e6ba4311c4304ac0dbf94bf14599bb95cf87d0dcf6c: Status 404 returned error can't find the container with id f458bd69e197ff39915b5e6ba4311c4304ac0dbf94bf14599bb95cf87d0dcf6c Jan 06 14:14:54 crc kubenswrapper[4869]: I0106 14:14:54.661442 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh" event={"ID":"2ad69939-a56e-4589-bf4b-68fb8d42d7eb","Type":"ContainerStarted","Data":"6ab3451d12a903658e16b8ee97a48f83819c8c9c2ed2d4cf06a497bd27fc912a"} Jan 06 14:14:54 crc kubenswrapper[4869]: I0106 14:14:54.662001 4869 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh" Jan 06 14:14:54 crc kubenswrapper[4869]: I0106 14:14:54.663285 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9" event={"ID":"4e8628c6-a97f-48ea-a91a-1ea5257c5e49","Type":"ContainerStarted","Data":"2094401fdc20524112e40969c5e5d8e0441073fe12e6722481d52421edfbb04f"} Jan 06 14:14:54 crc kubenswrapper[4869]: I0106 14:14:54.663463 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9" Jan 06 14:14:54 crc kubenswrapper[4869]: I0106 14:14:54.665062 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg" event={"ID":"c0aac0d5-701b-4a75-9bd0-4c9530692565","Type":"ContainerStarted","Data":"8af8f0fb352df65df58e4f81a8fd8feba04c0232c3c4768195c8469a7fe6250d"} Jan 06 14:14:54 crc kubenswrapper[4869]: I0106 14:14:54.665258 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg" Jan 06 14:14:54 crc kubenswrapper[4869]: I0106 14:14:54.666824 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" event={"ID":"24ca9405-001a-4beb-a0fa-0f3775dab087","Type":"ContainerStarted","Data":"a9b92837966625496cfe8f64fb1439f22510adbce8ac6feedfbabb4814a48999"} Jan 06 14:14:54 crc kubenswrapper[4869]: I0106 14:14:54.666850 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" event={"ID":"24ca9405-001a-4beb-a0fa-0f3775dab087","Type":"ContainerStarted","Data":"60cdd16c31149537bc17527df303dd9fc48d9a8a7e699ddec48603ee83316ea2"} Jan 06 14:14:54 crc kubenswrapper[4869]: I0106 14:14:54.666943 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:14:54 crc kubenswrapper[4869]: I0106 14:14:54.668119 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" event={"ID":"b295076d-930c-4a2b-9ba5-3cee1623e268","Type":"ContainerStarted","Data":"f458bd69e197ff39915b5e6ba4311c4304ac0dbf94bf14599bb95cf87d0dcf6c"} Jan 06 14:14:54 crc kubenswrapper[4869]: I0106 14:14:54.669272 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" event={"ID":"da44c856-c228-45b1-947b-891308581bb6","Type":"ContainerStarted","Data":"20326144d7e3e67d5e8a4231b78d1c54bcf1c7b6d5dff1e7087ce3da2120fbe6"} Jan 06 14:14:54 crc kubenswrapper[4869]: I0106 14:14:54.683026 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh" podStartSLOduration=2.581498011 podStartE2EDuration="33.683005419s" podCreationTimestamp="2026-01-06 14:14:21 +0000 UTC" firstStartedPulling="2026-01-06 14:14:22.846149666 +0000 UTC m=+881.385837330" lastFinishedPulling="2026-01-06 14:14:53.947657074 +0000 UTC m=+912.487344738" observedRunningTime="2026-01-06 14:14:54.678809437 +0000 UTC m=+913.218497121" watchObservedRunningTime="2026-01-06 14:14:54.683005419 +0000 UTC m=+913.222693073" Jan 06 14:14:54 crc kubenswrapper[4869]: I0106 
14:14:54.743363 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9" podStartSLOduration=2.952968791 podStartE2EDuration="34.743345282s" podCreationTimestamp="2026-01-06 14:14:20 +0000 UTC" firstStartedPulling="2026-01-06 14:14:22.158196184 +0000 UTC m=+880.697883848" lastFinishedPulling="2026-01-06 14:14:53.948572675 +0000 UTC m=+912.488260339" observedRunningTime="2026-01-06 14:14:54.73830792 +0000 UTC m=+913.277995584" watchObservedRunningTime="2026-01-06 14:14:54.743345282 +0000 UTC m=+913.283032936" Jan 06 14:14:54 crc kubenswrapper[4869]: I0106 14:14:54.743777 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" podStartSLOduration=33.743773132 podStartE2EDuration="33.743773132s" podCreationTimestamp="2026-01-06 14:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:14:54.721480995 +0000 UTC m=+913.261168659" watchObservedRunningTime="2026-01-06 14:14:54.743773132 +0000 UTC m=+913.283460796" Jan 06 14:14:54 crc kubenswrapper[4869]: I0106 14:14:54.755312 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg" podStartSLOduration=3.332569064 podStartE2EDuration="34.755291649s" podCreationTimestamp="2026-01-06 14:14:20 +0000 UTC" firstStartedPulling="2026-01-06 14:14:22.524896388 +0000 UTC m=+881.064584062" lastFinishedPulling="2026-01-06 14:14:53.947618983 +0000 UTC m=+912.487306647" observedRunningTime="2026-01-06 14:14:54.751377006 +0000 UTC m=+913.291064670" watchObservedRunningTime="2026-01-06 14:14:54.755291649 +0000 UTC m=+913.294979313" Jan 06 14:14:57 crc kubenswrapper[4869]: I0106 14:14:57.695095 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" event={"ID":"da44c856-c228-45b1-947b-891308581bb6","Type":"ContainerStarted","Data":"2e6b86b22f56e9439e4c3065e3c39bb524c290940fde5c98911ccc693e4112af"} Jan 06 14:14:57 crc kubenswrapper[4869]: I0106 14:14:57.695464 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" Jan 06 14:14:57 crc kubenswrapper[4869]: I0106 14:14:57.697804 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" event={"ID":"b295076d-930c-4a2b-9ba5-3cee1623e268","Type":"ContainerStarted","Data":"746773a4c23917dfd990a920cd30184a0ca2ebb5ab87510164829d449a686016"} Jan 06 14:14:57 crc kubenswrapper[4869]: I0106 14:14:57.697936 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" Jan 06 14:14:57 crc kubenswrapper[4869]: I0106 14:14:57.727134 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" podStartSLOduration=34.80548667 podStartE2EDuration="37.727099582s" podCreationTimestamp="2026-01-06 14:14:20 +0000 UTC" firstStartedPulling="2026-01-06 14:14:54.400088583 +0000 UTC m=+912.939776247" lastFinishedPulling="2026-01-06 14:14:57.321701495 +0000 UTC m=+915.861389159" observedRunningTime="2026-01-06 
14:14:57.723054774 +0000 UTC m=+916.262742468" watchObservedRunningTime="2026-01-06 14:14:57.727099582 +0000 UTC m=+916.266787246" Jan 06 14:14:57 crc kubenswrapper[4869]: I0106 14:14:57.745930 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" podStartSLOduration=34.935188974 podStartE2EDuration="37.745914145s" podCreationTimestamp="2026-01-06 14:14:20 +0000 UTC" firstStartedPulling="2026-01-06 14:14:54.51661713 +0000 UTC m=+913.056304794" lastFinishedPulling="2026-01-06 14:14:57.327342301 +0000 UTC m=+915.867029965" observedRunningTime="2026-01-06 14:14:57.743793454 +0000 UTC m=+916.283481118" watchObservedRunningTime="2026-01-06 14:14:57.745914145 +0000 UTC m=+916.285601809" Jan 06 14:15:00 crc kubenswrapper[4869]: I0106 14:15:00.690405 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-5tjdn" Jan 06 14:15:00 crc kubenswrapper[4869]: I0106 14:15:00.754109 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq" Jan 06 14:15:00 crc kubenswrapper[4869]: I0106 14:15:00.834385 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7596f46b97-l75w2" Jan 06 14:15:00 crc kubenswrapper[4869]: I0106 14:15:00.848152 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hcm2g" Jan 06 14:15:00 crc kubenswrapper[4869]: I0106 14:15:00.989959 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-npl5f" Jan 06 14:15:01 crc kubenswrapper[4869]: I0106 14:15:01.014323 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-2qx58" Jan 06 14:15:01 crc kubenswrapper[4869]: I0106 14:15:01.178129 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g6xt2" Jan 06 14:15:01 crc kubenswrapper[4869]: I0106 14:15:01.233821 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9" Jan 06 14:15:01 crc kubenswrapper[4869]: I0106 14:15:01.295496 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz" Jan 06 14:15:01 crc kubenswrapper[4869]: I0106 14:15:01.475925 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-7lbqn" Jan 06 14:15:01 crc kubenswrapper[4869]: I0106 14:15:01.516329 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg" Jan 06 14:15:01 crc kubenswrapper[4869]: I0106 14:15:01.645724 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wl9w7" Jan 06 14:15:01 crc kubenswrapper[4869]: I0106 14:15:01.725340 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-p249l" Jan 06 14:15:01 crc kubenswrapper[4869]: I0106 14:15:01.858597 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" Jan 06 14:15:01 crc kubenswrapper[4869]: I0106 14:15:01.889349 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh" Jan 06 14:15:01 crc kubenswrapper[4869]: I0106 14:15:01.921165 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d2jnv" Jan 06 14:15:02 crc kubenswrapper[4869]: I0106 14:15:02.024413 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" Jan 06 14:15:02 crc kubenswrapper[4869]: I0106 14:15:02.901273 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" Jan 06 14:15:03 crc kubenswrapper[4869]: I0106 14:15:03.415177 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" Jan 06 14:15:03 crc kubenswrapper[4869]: I0106 14:15:03.622622 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:15:03 crc kubenswrapper[4869]: I0106 14:15:03.622704 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:15:03 crc kubenswrapper[4869]: E0106 14:15:03.706741 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jtv5n" podUID="ff01227e-d9f4-4dd0-bc22-455a00294406" Jan 06 14:15:03 crc kubenswrapper[4869]: I0106 14:15:03.727550 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:15:04 crc kubenswrapper[4869]: E0106 14:15:04.707890 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:df69e4193043476bc71d0e06ac8bc7bbd17f7b624d495aae6b7c5e5b40c9e1e7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" podUID="3f4a328b-302b-496b-af2b-abec609682a6" Jan 06 14:15:15 crc kubenswrapper[4869]: I0106 14:15:15.708307 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 06 14:15:16 crc kubenswrapper[4869]: I0106 14:15:16.852725 4869 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jtv5n" event={"ID":"ff01227e-d9f4-4dd0-bc22-455a00294406","Type":"ContainerStarted","Data":"88e7dbe5dcc41aaa2fd593d6a1e5a133e65a3754d5f207e84936f72261c16f12"} Jan 06 14:15:16 crc kubenswrapper[4869]: I0106 14:15:16.876046 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jtv5n" podStartSLOduration=2.147094056 podStartE2EDuration="55.876026177s" podCreationTimestamp="2026-01-06 14:14:21 +0000 UTC" firstStartedPulling="2026-01-06 14:14:22.903522878 +0000 UTC m=+881.443210542" lastFinishedPulling="2026-01-06 14:15:16.632454999 +0000 UTC m=+935.172142663" observedRunningTime="2026-01-06 14:15:16.874882159 +0000 UTC m=+935.414569823" watchObservedRunningTime="2026-01-06 14:15:16.876026177 +0000 UTC m=+935.415713841" Jan 06 14:15:18 crc kubenswrapper[4869]: I0106 14:15:18.874981 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" event={"ID":"3f4a328b-302b-496b-af2b-abec609682a6","Type":"ContainerStarted","Data":"d35f27a17e46ec9e28b3de1a927d185e2d8b13c27d6a85662be2391aa7c6403e"} Jan 06 14:15:18 crc kubenswrapper[4869]: I0106 14:15:18.876247 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" Jan 06 14:15:18 crc kubenswrapper[4869]: I0106 14:15:18.902495 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" podStartSLOduration=2.189273242 podStartE2EDuration="57.902468765s" podCreationTimestamp="2026-01-06 14:14:21 +0000 UTC" firstStartedPulling="2026-01-06 14:14:22.617931418 +0000 UTC m=+881.157619082" lastFinishedPulling="2026-01-06 14:15:18.331126941 +0000 UTC m=+936.870814605" observedRunningTime="2026-01-06 14:15:18.893783995 +0000 UTC m=+937.433471669" watchObservedRunningTime="2026-01-06 14:15:18.902468765 +0000 UTC m=+937.442156449" Jan 06 14:15:31 crc kubenswrapper[4869]: I0106 14:15:31.813475 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" Jan 06 14:15:33 crc kubenswrapper[4869]: I0106 14:15:33.622374 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:15:33 crc kubenswrapper[4869]: I0106 14:15:33.622842 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:15:33 crc kubenswrapper[4869]: I0106 14:15:33.622917 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:15:33 crc kubenswrapper[4869]: I0106 14:15:33.624133 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"00b21de14b885131a6ee84f5e807e1d7b8525758bcccc0f6c7a638d52ae501ed"} pod="openshift-machine-config-operator/machine-config-daemon-kt9df" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 06 14:15:33 crc kubenswrapper[4869]: I0106 14:15:33.624279 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" containerID="cri-o://00b21de14b885131a6ee84f5e807e1d7b8525758bcccc0f6c7a638d52ae501ed" gracePeriod=600 Jan 06 14:15:34 crc kubenswrapper[4869]: I0106 14:15:34.038302 4869 generic.go:334] "Generic (PLEG): container finished" podID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerID="00b21de14b885131a6ee84f5e807e1d7b8525758bcccc0f6c7a638d52ae501ed" exitCode=0 Jan 06 14:15:34 crc kubenswrapper[4869]: I0106 14:15:34.038395 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerDied","Data":"00b21de14b885131a6ee84f5e807e1d7b8525758bcccc0f6c7a638d52ae501ed"} Jan 06 14:15:34 crc kubenswrapper[4869]: I0106 14:15:34.038871 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerStarted","Data":"761debf1eef98bf25e5ef97d0bbf7309e01c1e5b01dc714bc8dcd3f2a34d299e"} Jan 06 14:15:34 crc kubenswrapper[4869]: I0106 14:15:34.038907 4869 scope.go:117] "RemoveContainer" containerID="27602a36611783728a2b020431c5bc3185474cb58d70bd206f2784227d107aee" Jan 06 14:15:36 crc kubenswrapper[4869]: I0106 14:15:36.010892 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29461815-dzrfx"] Jan 06 14:15:36 crc kubenswrapper[4869]: I0106 14:15:36.012436 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29461815-dzrfx" Jan 06 14:15:36 crc kubenswrapper[4869]: I0106 14:15:36.015178 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 06 14:15:36 crc kubenswrapper[4869]: I0106 14:15:36.016170 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 06 14:15:36 crc kubenswrapper[4869]: I0106 14:15:36.026684 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29461815-dzrfx"] Jan 06 14:15:36 crc kubenswrapper[4869]: I0106 14:15:36.099388 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zvt4\" (UniqueName: \"kubernetes.io/projected/c202d00c-2db6-42a5-bf18-fb6297a6dd17-kube-api-access-8zvt4\") pod \"collect-profiles-29461815-dzrfx\" (UID: \"c202d00c-2db6-42a5-bf18-fb6297a6dd17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461815-dzrfx" Jan 06 14:15:36 crc kubenswrapper[4869]: I0106 14:15:36.099479 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c202d00c-2db6-42a5-bf18-fb6297a6dd17-config-volume\") pod \"collect-profiles-29461815-dzrfx\" (UID: \"c202d00c-2db6-42a5-bf18-fb6297a6dd17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461815-dzrfx" Jan 06 14:15:36 crc kubenswrapper[4869]: I0106 14:15:36.099535 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c202d00c-2db6-42a5-bf18-fb6297a6dd17-secret-volume\") pod \"collect-profiles-29461815-dzrfx\" (UID: \"c202d00c-2db6-42a5-bf18-fb6297a6dd17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461815-dzrfx" Jan 06 14:15:36 crc kubenswrapper[4869]: I0106 14:15:36.201420 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c202d00c-2db6-42a5-bf18-fb6297a6dd17-secret-volume\") pod \"collect-profiles-29461815-dzrfx\" (UID: \"c202d00c-2db6-42a5-bf18-fb6297a6dd17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461815-dzrfx" Jan 06 14:15:36 crc kubenswrapper[4869]: I0106 14:15:36.201500 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zvt4\" (UniqueName: \"kubernetes.io/projected/c202d00c-2db6-42a5-bf18-fb6297a6dd17-kube-api-access-8zvt4\") pod \"collect-profiles-29461815-dzrfx\" (UID: \"c202d00c-2db6-42a5-bf18-fb6297a6dd17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461815-dzrfx" Jan 06 14:15:36 crc kubenswrapper[4869]: I0106 14:15:36.201547 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c202d00c-2db6-42a5-bf18-fb6297a6dd17-config-volume\") pod \"collect-profiles-29461815-dzrfx\" (UID: \"c202d00c-2db6-42a5-bf18-fb6297a6dd17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461815-dzrfx" Jan 06 14:15:36 crc kubenswrapper[4869]: I0106 14:15:36.202430 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c202d00c-2db6-42a5-bf18-fb6297a6dd17-config-volume\") pod 
\"collect-profiles-29461815-dzrfx\" (UID: \"c202d00c-2db6-42a5-bf18-fb6297a6dd17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461815-dzrfx" Jan 06 14:15:36 crc kubenswrapper[4869]: I0106 14:15:36.207005 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c202d00c-2db6-42a5-bf18-fb6297a6dd17-secret-volume\") pod \"collect-profiles-29461815-dzrfx\" (UID: \"c202d00c-2db6-42a5-bf18-fb6297a6dd17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461815-dzrfx" Jan 06 14:15:36 crc kubenswrapper[4869]: I0106 14:15:36.219068 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zvt4\" (UniqueName: \"kubernetes.io/projected/c202d00c-2db6-42a5-bf18-fb6297a6dd17-kube-api-access-8zvt4\") pod \"collect-profiles-29461815-dzrfx\" (UID: \"c202d00c-2db6-42a5-bf18-fb6297a6dd17\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461815-dzrfx" Jan 06 14:15:36 crc kubenswrapper[4869]: I0106 14:15:36.331853 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29461815-dzrfx" Jan 06 14:15:36 crc kubenswrapper[4869]: I0106 14:15:36.731502 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29461815-dzrfx"] Jan 06 14:15:36 crc kubenswrapper[4869]: W0106 14:15:36.736570 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc202d00c_2db6_42a5_bf18_fb6297a6dd17.slice/crio-28e46281fca80dac23822d4433049ae54029d9154124a50d683ecf5ce10031c8 WatchSource:0}: Error finding container 28e46281fca80dac23822d4433049ae54029d9154124a50d683ecf5ce10031c8: Status 404 returned error can't find the container with id 28e46281fca80dac23822d4433049ae54029d9154124a50d683ecf5ce10031c8 Jan 06 14:15:37 crc kubenswrapper[4869]: I0106 14:15:37.064153 4869 generic.go:334] "Generic (PLEG): container finished" podID="c202d00c-2db6-42a5-bf18-fb6297a6dd17" containerID="c07a52c832bbed6ebaa9ffa80812486ebc8474dab2b80bff99fd352d2fd155d1" exitCode=0 Jan 06 14:15:37 crc kubenswrapper[4869]: I0106 14:15:37.064207 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29461815-dzrfx" event={"ID":"c202d00c-2db6-42a5-bf18-fb6297a6dd17","Type":"ContainerDied","Data":"c07a52c832bbed6ebaa9ffa80812486ebc8474dab2b80bff99fd352d2fd155d1"} Jan 06 14:15:37 crc kubenswrapper[4869]: I0106 14:15:37.064236 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29461815-dzrfx" event={"ID":"c202d00c-2db6-42a5-bf18-fb6297a6dd17","Type":"ContainerStarted","Data":"28e46281fca80dac23822d4433049ae54029d9154124a50d683ecf5ce10031c8"} Jan 06 14:15:38 crc kubenswrapper[4869]: I0106 14:15:38.334030 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29461815-dzrfx" Jan 06 14:15:38 crc kubenswrapper[4869]: I0106 14:15:38.432133 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c202d00c-2db6-42a5-bf18-fb6297a6dd17-config-volume\") pod \"c202d00c-2db6-42a5-bf18-fb6297a6dd17\" (UID: \"c202d00c-2db6-42a5-bf18-fb6297a6dd17\") " Jan 06 14:15:38 crc kubenswrapper[4869]: I0106 14:15:38.432307 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zvt4\" (UniqueName: \"kubernetes.io/projected/c202d00c-2db6-42a5-bf18-fb6297a6dd17-kube-api-access-8zvt4\") pod \"c202d00c-2db6-42a5-bf18-fb6297a6dd17\" (UID: \"c202d00c-2db6-42a5-bf18-fb6297a6dd17\") " Jan 06 14:15:38 crc kubenswrapper[4869]: I0106 14:15:38.432495 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c202d00c-2db6-42a5-bf18-fb6297a6dd17-secret-volume\") pod \"c202d00c-2db6-42a5-bf18-fb6297a6dd17\" (UID: \"c202d00c-2db6-42a5-bf18-fb6297a6dd17\") " Jan 06 14:15:38 crc kubenswrapper[4869]: I0106 14:15:38.432776 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c202d00c-2db6-42a5-bf18-fb6297a6dd17-config-volume" (OuterVolumeSpecName: "config-volume") pod "c202d00c-2db6-42a5-bf18-fb6297a6dd17" (UID: "c202d00c-2db6-42a5-bf18-fb6297a6dd17"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:15:38 crc kubenswrapper[4869]: I0106 14:15:38.432951 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c202d00c-2db6-42a5-bf18-fb6297a6dd17-config-volume\") on node \"crc\" DevicePath \"\"" Jan 06 14:15:38 crc kubenswrapper[4869]: I0106 14:15:38.437348 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c202d00c-2db6-42a5-bf18-fb6297a6dd17-kube-api-access-8zvt4" (OuterVolumeSpecName: "kube-api-access-8zvt4") pod "c202d00c-2db6-42a5-bf18-fb6297a6dd17" (UID: "c202d00c-2db6-42a5-bf18-fb6297a6dd17"). InnerVolumeSpecName "kube-api-access-8zvt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:15:38 crc kubenswrapper[4869]: I0106 14:15:38.437612 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c202d00c-2db6-42a5-bf18-fb6297a6dd17-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c202d00c-2db6-42a5-bf18-fb6297a6dd17" (UID: "c202d00c-2db6-42a5-bf18-fb6297a6dd17"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:15:38 crc kubenswrapper[4869]: I0106 14:15:38.534295 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zvt4\" (UniqueName: \"kubernetes.io/projected/c202d00c-2db6-42a5-bf18-fb6297a6dd17-kube-api-access-8zvt4\") on node \"crc\" DevicePath \"\"" Jan 06 14:15:38 crc kubenswrapper[4869]: I0106 14:15:38.534331 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c202d00c-2db6-42a5-bf18-fb6297a6dd17-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 06 14:15:39 crc kubenswrapper[4869]: I0106 14:15:39.089972 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29461815-dzrfx" event={"ID":"c202d00c-2db6-42a5-bf18-fb6297a6dd17","Type":"ContainerDied","Data":"28e46281fca80dac23822d4433049ae54029d9154124a50d683ecf5ce10031c8"} Jan 06 14:15:39 crc kubenswrapper[4869]: I0106 14:15:39.090044 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28e46281fca80dac23822d4433049ae54029d9154124a50d683ecf5ce10031c8" Jan 06 14:15:39 crc kubenswrapper[4869]: I0106 14:15:39.090068 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29461815-dzrfx" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.470022 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-fqmrn"] Jan 06 14:15:49 crc kubenswrapper[4869]: E0106 14:15:49.471302 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c202d00c-2db6-42a5-bf18-fb6297a6dd17" containerName="collect-profiles" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.471323 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c202d00c-2db6-42a5-bf18-fb6297a6dd17" containerName="collect-profiles" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.471521 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c202d00c-2db6-42a5-bf18-fb6297a6dd17" containerName="collect-profiles" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.472457 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-fqmrn" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.477782 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.477988 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.479332 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.484729 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-78sn8" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.490378 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-fqmrn"] Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.559020 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5fwv7"] Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.563303 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5fwv7" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.567148 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.589987 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5fwv7"] Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.603395 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a-config\") pod \"dnsmasq-dns-78dd6ddcc-5fwv7\" (UID: \"1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5fwv7" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.603445 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d42900-dde5-4d20-973e-ba27d6cf4650-config\") pod \"dnsmasq-dns-675f4bcbfc-fqmrn\" (UID: \"94d42900-dde5-4d20-973e-ba27d6cf4650\") " pod="openstack/dnsmasq-dns-675f4bcbfc-fqmrn" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.603482 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5fwv7\" (UID: \"1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5fwv7" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.603533 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwnjb\" (UniqueName: \"kubernetes.io/projected/94d42900-dde5-4d20-973e-ba27d6cf4650-kube-api-access-dwnjb\") pod \"dnsmasq-dns-675f4bcbfc-fqmrn\" (UID: \"94d42900-dde5-4d20-973e-ba27d6cf4650\") " pod="openstack/dnsmasq-dns-675f4bcbfc-fqmrn" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.603582 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rnvz\" (UniqueName: \"kubernetes.io/projected/1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a-kube-api-access-9rnvz\") pod \"dnsmasq-dns-78dd6ddcc-5fwv7\" (UID: \"1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5fwv7" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.705761 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d42900-dde5-4d20-973e-ba27d6cf4650-config\") pod \"dnsmasq-dns-675f4bcbfc-fqmrn\" (UID: \"94d42900-dde5-4d20-973e-ba27d6cf4650\") " pod="openstack/dnsmasq-dns-675f4bcbfc-fqmrn" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.705881 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5fwv7\" (UID: \"1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5fwv7" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.705932 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwnjb\" (UniqueName: \"kubernetes.io/projected/94d42900-dde5-4d20-973e-ba27d6cf4650-kube-api-access-dwnjb\") pod \"dnsmasq-dns-675f4bcbfc-fqmrn\" (UID: \"94d42900-dde5-4d20-973e-ba27d6cf4650\") " pod="openstack/dnsmasq-dns-675f4bcbfc-fqmrn" 
Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.706009 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rnvz\" (UniqueName: \"kubernetes.io/projected/1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a-kube-api-access-9rnvz\") pod \"dnsmasq-dns-78dd6ddcc-5fwv7\" (UID: \"1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5fwv7" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.706048 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a-config\") pod \"dnsmasq-dns-78dd6ddcc-5fwv7\" (UID: \"1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5fwv7" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.706988 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d42900-dde5-4d20-973e-ba27d6cf4650-config\") pod \"dnsmasq-dns-675f4bcbfc-fqmrn\" (UID: \"94d42900-dde5-4d20-973e-ba27d6cf4650\") " pod="openstack/dnsmasq-dns-675f4bcbfc-fqmrn" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.707355 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a-config\") pod \"dnsmasq-dns-78dd6ddcc-5fwv7\" (UID: \"1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5fwv7" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.707523 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5fwv7\" (UID: \"1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5fwv7" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.739430 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwnjb\" (UniqueName: \"kubernetes.io/projected/94d42900-dde5-4d20-973e-ba27d6cf4650-kube-api-access-dwnjb\") pod \"dnsmasq-dns-675f4bcbfc-fqmrn\" (UID: \"94d42900-dde5-4d20-973e-ba27d6cf4650\") " pod="openstack/dnsmasq-dns-675f4bcbfc-fqmrn" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.741435 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rnvz\" (UniqueName: \"kubernetes.io/projected/1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a-kube-api-access-9rnvz\") pod \"dnsmasq-dns-78dd6ddcc-5fwv7\" (UID: \"1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5fwv7" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.792185 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-fqmrn" Jan 06 14:15:49 crc kubenswrapper[4869]: I0106 14:15:49.894427 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5fwv7" Jan 06 14:15:50 crc kubenswrapper[4869]: I0106 14:15:50.297278 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-fqmrn"] Jan 06 14:15:50 crc kubenswrapper[4869]: I0106 14:15:50.415476 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5fwv7"] Jan 06 14:15:50 crc kubenswrapper[4869]: W0106 14:15:50.418356 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1dd20d92_cab6_4bdc_b9d6_8eac6e189f3a.slice/crio-3eb1ce1472f97a80de0dd73e5267dac36cf0e9b2e9ad78630a111c12baaaccab WatchSource:0}: Error finding container 3eb1ce1472f97a80de0dd73e5267dac36cf0e9b2e9ad78630a111c12baaaccab: Status 404 returned error can't find the container with id 3eb1ce1472f97a80de0dd73e5267dac36cf0e9b2e9ad78630a111c12baaaccab Jan 06 14:15:51 crc kubenswrapper[4869]: I0106 14:15:51.181474 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-fqmrn" event={"ID":"94d42900-dde5-4d20-973e-ba27d6cf4650","Type":"ContainerStarted","Data":"6ac809b363f900bcd666a2db48a392df3c4d7370d35932790c302e2039449957"} Jan 06 14:15:51 crc kubenswrapper[4869]: I0106 14:15:51.183275 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-5fwv7" event={"ID":"1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a","Type":"ContainerStarted","Data":"3eb1ce1472f97a80de0dd73e5267dac36cf0e9b2e9ad78630a111c12baaaccab"} Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.187169 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-fqmrn"] Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.247276 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-trwtt"] Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.259468 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-trwtt" Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.293821 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-trwtt"] Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.464474 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2def269d-7d12-409c-9513-8d3bc8aeba7f-config\") pod \"dnsmasq-dns-666b6646f7-trwtt\" (UID: \"2def269d-7d12-409c-9513-8d3bc8aeba7f\") " pod="openstack/dnsmasq-dns-666b6646f7-trwtt" Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.464527 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2def269d-7d12-409c-9513-8d3bc8aeba7f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-trwtt\" (UID: \"2def269d-7d12-409c-9513-8d3bc8aeba7f\") " pod="openstack/dnsmasq-dns-666b6646f7-trwtt" Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.464578 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvrjg\" (UniqueName: \"kubernetes.io/projected/2def269d-7d12-409c-9513-8d3bc8aeba7f-kube-api-access-tvrjg\") pod \"dnsmasq-dns-666b6646f7-trwtt\" (UID: \"2def269d-7d12-409c-9513-8d3bc8aeba7f\") " pod="openstack/dnsmasq-dns-666b6646f7-trwtt" Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.576201 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvrjg\" (UniqueName: \"kubernetes.io/projected/2def269d-7d12-409c-9513-8d3bc8aeba7f-kube-api-access-tvrjg\") pod \"dnsmasq-dns-666b6646f7-trwtt\" (UID: \"2def269d-7d12-409c-9513-8d3bc8aeba7f\") " pod="openstack/dnsmasq-dns-666b6646f7-trwtt" Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.576360 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2def269d-7d12-409c-9513-8d3bc8aeba7f-config\") pod \"dnsmasq-dns-666b6646f7-trwtt\" (UID: \"2def269d-7d12-409c-9513-8d3bc8aeba7f\") " pod="openstack/dnsmasq-dns-666b6646f7-trwtt" Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.576396 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2def269d-7d12-409c-9513-8d3bc8aeba7f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-trwtt\" (UID: \"2def269d-7d12-409c-9513-8d3bc8aeba7f\") " pod="openstack/dnsmasq-dns-666b6646f7-trwtt" Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.577845 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2def269d-7d12-409c-9513-8d3bc8aeba7f-dns-svc\") pod \"dnsmasq-dns-666b6646f7-trwtt\" (UID: \"2def269d-7d12-409c-9513-8d3bc8aeba7f\") " pod="openstack/dnsmasq-dns-666b6646f7-trwtt" Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.581367 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2def269d-7d12-409c-9513-8d3bc8aeba7f-config\") pod \"dnsmasq-dns-666b6646f7-trwtt\" (UID: \"2def269d-7d12-409c-9513-8d3bc8aeba7f\") " pod="openstack/dnsmasq-dns-666b6646f7-trwtt" Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.630551 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvrjg\" (UniqueName: 
\"kubernetes.io/projected/2def269d-7d12-409c-9513-8d3bc8aeba7f-kube-api-access-tvrjg\") pod \"dnsmasq-dns-666b6646f7-trwtt\" (UID: \"2def269d-7d12-409c-9513-8d3bc8aeba7f\") " pod="openstack/dnsmasq-dns-666b6646f7-trwtt" Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.677528 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5fwv7"] Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.715407 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cgzgv"] Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.716780 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.733240 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cgzgv"] Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.790577 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbgsv\" (UniqueName: \"kubernetes.io/projected/7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94-kube-api-access-hbgsv\") pod \"dnsmasq-dns-57d769cc4f-cgzgv\" (UID: \"7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94\") " pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.790681 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-cgzgv\" (UID: \"7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94\") " pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.790735 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94-config\") pod \"dnsmasq-dns-57d769cc4f-cgzgv\" (UID: \"7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94\") " pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.894446 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbgsv\" (UniqueName: \"kubernetes.io/projected/7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94-kube-api-access-hbgsv\") pod \"dnsmasq-dns-57d769cc4f-cgzgv\" (UID: \"7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94\") " pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.894575 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-cgzgv\" (UID: \"7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94\") " pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.894624 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94-config\") pod \"dnsmasq-dns-57d769cc4f-cgzgv\" (UID: \"7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94\") " pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.898746 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-trwtt" Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.910974 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-cgzgv\" (UID: \"7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94\") " pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.911693 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94-config\") pod \"dnsmasq-dns-57d769cc4f-cgzgv\" (UID: \"7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94\") " pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" Jan 06 14:15:52 crc kubenswrapper[4869]: I0106 14:15:52.913565 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbgsv\" (UniqueName: \"kubernetes.io/projected/7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94-kube-api-access-hbgsv\") pod \"dnsmasq-dns-57d769cc4f-cgzgv\" (UID: \"7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94\") " pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.060083 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.380051 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.388157 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.410204 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.410405 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.410865 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.411065 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.411391 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.411557 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-m222g" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.411626 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.415232 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.509841 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a54155a0-94ff-4519-81e3-68a0bb1b62b6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.509890 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.509911 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.509932 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a54155a0-94ff-4519-81e3-68a0bb1b62b6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.509972 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a54155a0-94ff-4519-81e3-68a0bb1b62b6-config-data\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.509988 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a54155a0-94ff-4519-81e3-68a0bb1b62b6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.510010 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.511649 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs97c\" (UniqueName: \"kubernetes.io/projected/a54155a0-94ff-4519-81e3-68a0bb1b62b6-kube-api-access-xs97c\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.511736 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.511764 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.511781 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/a54155a0-94ff-4519-81e3-68a0bb1b62b6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.525003 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-trwtt"] Jan 06 14:15:53 crc kubenswrapper[4869]: W0106 14:15:53.546633 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2def269d_7d12_409c_9513_8d3bc8aeba7f.slice/crio-e3534a249aeb16c2f93c29c6000fdc92edeb7cef994665f2602577a175303a4f WatchSource:0}: Error finding container e3534a249aeb16c2f93c29c6000fdc92edeb7cef994665f2602577a175303a4f: Status 404 returned error can't find the container with id e3534a249aeb16c2f93c29c6000fdc92edeb7cef994665f2602577a175303a4f Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.613440 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a54155a0-94ff-4519-81e3-68a0bb1b62b6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.618740 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a54155a0-94ff-4519-81e3-68a0bb1b62b6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.618942 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.618978 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.618996 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a54155a0-94ff-4519-81e3-68a0bb1b62b6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.619088 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a54155a0-94ff-4519-81e3-68a0bb1b62b6-config-data\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.619116 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a54155a0-94ff-4519-81e3-68a0bb1b62b6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.619147 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.619212 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs97c\" (UniqueName: \"kubernetes.io/projected/a54155a0-94ff-4519-81e3-68a0bb1b62b6-kube-api-access-xs97c\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.619317 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.619361 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.619380 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a54155a0-94ff-4519-81e3-68a0bb1b62b6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.620642 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.621527 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a54155a0-94ff-4519-81e3-68a0bb1b62b6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.621805 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.622067 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.625124 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a54155a0-94ff-4519-81e3-68a0bb1b62b6-config-data\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " 
pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.626760 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.632555 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a54155a0-94ff-4519-81e3-68a0bb1b62b6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.636141 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a54155a0-94ff-4519-81e3-68a0bb1b62b6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.641708 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs97c\" (UniqueName: \"kubernetes.io/projected/a54155a0-94ff-4519-81e3-68a0bb1b62b6-kube-api-access-xs97c\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.663933 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.666494 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.697599 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cgzgv"] Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.749626 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.810143 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.811749 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.815017 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.815233 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.815346 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.815455 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.815604 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.815772 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-c7c5n" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.815884 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.828952 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.924486 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.924544 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.924568 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf5bb\" (UniqueName: \"kubernetes.io/projected/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-kube-api-access-rf5bb\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.924598 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.924632 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.924710 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.924808 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.924903 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.924930 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.924988 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:53 crc kubenswrapper[4869]: I0106 14:15:53.925004 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.027089 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.027168 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.027197 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf5bb\" (UniqueName: \"kubernetes.io/projected/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-kube-api-access-rf5bb\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.027264 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.027303 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.027341 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.027386 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.027415 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.027437 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.027491 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.027511 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.031161 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.033798 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-config-data\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.033947 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.034093 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.034144 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.037095 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.039494 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.039910 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.045300 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.045974 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.053639 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf5bb\" (UniqueName: \"kubernetes.io/projected/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-kube-api-access-rf5bb\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.081304 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.143990 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.228091 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" event={"ID":"7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94","Type":"ContainerStarted","Data":"8d54e79d87284a979ff9218ea3593ec2b05fa4ad9fee9e3bedf393ef0dd395ee"} Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.229907 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-trwtt" event={"ID":"2def269d-7d12-409c-9513-8d3bc8aeba7f","Type":"ContainerStarted","Data":"e3534a249aeb16c2f93c29c6000fdc92edeb7cef994665f2602577a175303a4f"} Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.339098 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 06 14:15:54 crc kubenswrapper[4869]: W0106 14:15:54.355406 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda54155a0_94ff_4519_81e3_68a0bb1b62b6.slice/crio-41971692dd8b3b01daa8fe8f88e7596c1f6cdf0b79ec8c3218e1501587892016 WatchSource:0}: Error finding container 41971692dd8b3b01daa8fe8f88e7596c1f6cdf0b79ec8c3218e1501587892016: Status 404 returned error can't find the container with id 41971692dd8b3b01daa8fe8f88e7596c1f6cdf0b79ec8c3218e1501587892016 Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.674733 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 06 14:15:54 crc kubenswrapper[4869]: W0106 14:15:54.683044 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae2b9cdc_8940_4aeb_bea8_fac416d93eed.slice/crio-52f1f7c403694c3b1d3b4b841c0dc136c6ed81bcddb5602d091205665ebd6b20 WatchSource:0}: Error finding container 52f1f7c403694c3b1d3b4b841c0dc136c6ed81bcddb5602d091205665ebd6b20: Status 404 returned error can't find the container with id 52f1f7c403694c3b1d3b4b841c0dc136c6ed81bcddb5602d091205665ebd6b20 Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.984367 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.985556 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.989730 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-4cp7j" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.993268 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.993360 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.993820 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 06 14:15:54 crc kubenswrapper[4869]: I0106 14:15:54.999292 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.002249 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.154643 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be48d5b3-d81d-4bb6-a7a6-7706d8208db8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.155134 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.155186 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zb6q\" (UniqueName: \"kubernetes.io/projected/be48d5b3-d81d-4bb6-a7a6-7706d8208db8-kube-api-access-2zb6q\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.155227 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/be48d5b3-d81d-4bb6-a7a6-7706d8208db8-kolla-config\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.155249 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/be48d5b3-d81d-4bb6-a7a6-7706d8208db8-config-data-default\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.155287 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/be48d5b3-d81d-4bb6-a7a6-7706d8208db8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.155345 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/be48d5b3-d81d-4bb6-a7a6-7706d8208db8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.155390 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be48d5b3-d81d-4bb6-a7a6-7706d8208db8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.239760 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a54155a0-94ff-4519-81e3-68a0bb1b62b6","Type":"ContainerStarted","Data":"41971692dd8b3b01daa8fe8f88e7596c1f6cdf0b79ec8c3218e1501587892016"} Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.241132 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ae2b9cdc-8940-4aeb-bea8-fac416d93eed","Type":"ContainerStarted","Data":"52f1f7c403694c3b1d3b4b841c0dc136c6ed81bcddb5602d091205665ebd6b20"} Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.256418 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zb6q\" (UniqueName: \"kubernetes.io/projected/be48d5b3-d81d-4bb6-a7a6-7706d8208db8-kube-api-access-2zb6q\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.256485 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/be48d5b3-d81d-4bb6-a7a6-7706d8208db8-kolla-config\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.256507 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/be48d5b3-d81d-4bb6-a7a6-7706d8208db8-config-data-default\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.256535 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/be48d5b3-d81d-4bb6-a7a6-7706d8208db8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.256578 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/be48d5b3-d81d-4bb6-a7a6-7706d8208db8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.256592 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be48d5b3-d81d-4bb6-a7a6-7706d8208db8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 
14:15:55.256615 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be48d5b3-d81d-4bb6-a7a6-7706d8208db8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.256640 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.256978 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.258425 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/be48d5b3-d81d-4bb6-a7a6-7706d8208db8-kolla-config\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.259092 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/be48d5b3-d81d-4bb6-a7a6-7706d8208db8-config-data-default\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.259134 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/be48d5b3-d81d-4bb6-a7a6-7706d8208db8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.260464 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be48d5b3-d81d-4bb6-a7a6-7706d8208db8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.268406 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/be48d5b3-d81d-4bb6-a7a6-7706d8208db8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.285816 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be48d5b3-d81d-4bb6-a7a6-7706d8208db8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.295350 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " 
pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.297894 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zb6q\" (UniqueName: \"kubernetes.io/projected/be48d5b3-d81d-4bb6-a7a6-7706d8208db8-kube-api-access-2zb6q\") pod \"openstack-galera-0\" (UID: \"be48d5b3-d81d-4bb6-a7a6-7706d8208db8\") " pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.319852 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 06 14:15:55 crc kubenswrapper[4869]: I0106 14:15:55.980883 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 06 14:15:56 crc kubenswrapper[4869]: W0106 14:15:56.055626 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe48d5b3_d81d_4bb6_a7a6_7706d8208db8.slice/crio-d9925f09d99e7dfca33b551779f8b61da729630a9c2e0f9708f5524664419106 WatchSource:0}: Error finding container d9925f09d99e7dfca33b551779f8b61da729630a9c2e0f9708f5524664419106: Status 404 returned error can't find the container with id d9925f09d99e7dfca33b551779f8b61da729630a9c2e0f9708f5524664419106 Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.256914 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"be48d5b3-d81d-4bb6-a7a6-7706d8208db8","Type":"ContainerStarted","Data":"d9925f09d99e7dfca33b551779f8b61da729630a9c2e0f9708f5524664419106"} Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.566393 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.572124 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.579969 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.619084 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-g55ph" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.619522 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.619196 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.619240 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.723838 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b5ecad54-1487-4d25-9bd1-e6e486ba59d5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.724428 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.724549 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5ecad54-1487-4d25-9bd1-e6e486ba59d5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.725803 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxtrs\" (UniqueName: \"kubernetes.io/projected/b5ecad54-1487-4d25-9bd1-e6e486ba59d5-kube-api-access-fxtrs\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.725875 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b5ecad54-1487-4d25-9bd1-e6e486ba59d5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.726100 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5ecad54-1487-4d25-9bd1-e6e486ba59d5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.726142 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b5ecad54-1487-4d25-9bd1-e6e486ba59d5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.726182 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b5ecad54-1487-4d25-9bd1-e6e486ba59d5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.827192 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxtrs\" (UniqueName: \"kubernetes.io/projected/b5ecad54-1487-4d25-9bd1-e6e486ba59d5-kube-api-access-fxtrs\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.827244 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b5ecad54-1487-4d25-9bd1-e6e486ba59d5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.827301 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5ecad54-1487-4d25-9bd1-e6e486ba59d5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.827320 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5ecad54-1487-4d25-9bd1-e6e486ba59d5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.827346 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b5ecad54-1487-4d25-9bd1-e6e486ba59d5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.827367 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b5ecad54-1487-4d25-9bd1-e6e486ba59d5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.827403 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.827430 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b5ecad54-1487-4d25-9bd1-e6e486ba59d5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.829028 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b5ecad54-1487-4d25-9bd1-e6e486ba59d5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.829186 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b5ecad54-1487-4d25-9bd1-e6e486ba59d5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.829548 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5ecad54-1487-4d25-9bd1-e6e486ba59d5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.829889 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.829913 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b5ecad54-1487-4d25-9bd1-e6e486ba59d5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.850634 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5ecad54-1487-4d25-9bd1-e6e486ba59d5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.851107 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.851218 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5ecad54-1487-4d25-9bd1-e6e486ba59d5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.856852 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.872982 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.873641 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-bqvgx" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.878919 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.893711 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.916589 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxtrs\" (UniqueName: \"kubernetes.io/projected/b5ecad54-1487-4d25-9bd1-e6e486ba59d5-kube-api-access-fxtrs\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.917801 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b5ecad54-1487-4d25-9bd1-e6e486ba59d5\") " pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.931293 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19bf085e-32cc-4a29-9a2f-ea0b8045c193-combined-ca-bundle\") pod \"memcached-0\" (UID: \"19bf085e-32cc-4a29-9a2f-ea0b8045c193\") " pod="openstack/memcached-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.931365 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19bf085e-32cc-4a29-9a2f-ea0b8045c193-config-data\") pod \"memcached-0\" (UID: \"19bf085e-32cc-4a29-9a2f-ea0b8045c193\") " pod="openstack/memcached-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.931442 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wk5z\" (UniqueName: \"kubernetes.io/projected/19bf085e-32cc-4a29-9a2f-ea0b8045c193-kube-api-access-6wk5z\") pod \"memcached-0\" (UID: \"19bf085e-32cc-4a29-9a2f-ea0b8045c193\") " pod="openstack/memcached-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.931510 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/19bf085e-32cc-4a29-9a2f-ea0b8045c193-memcached-tls-certs\") pod \"memcached-0\" (UID: \"19bf085e-32cc-4a29-9a2f-ea0b8045c193\") " pod="openstack/memcached-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.931579 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19bf085e-32cc-4a29-9a2f-ea0b8045c193-kolla-config\") pod \"memcached-0\" (UID: \"19bf085e-32cc-4a29-9a2f-ea0b8045c193\") " pod="openstack/memcached-0" Jan 06 14:15:56 crc kubenswrapper[4869]: I0106 14:15:56.947405 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 06 14:15:57 crc kubenswrapper[4869]: I0106 14:15:57.032516 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19bf085e-32cc-4a29-9a2f-ea0b8045c193-config-data\") pod \"memcached-0\" (UID: \"19bf085e-32cc-4a29-9a2f-ea0b8045c193\") " pod="openstack/memcached-0" Jan 06 14:15:57 crc kubenswrapper[4869]: I0106 14:15:57.032577 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wk5z\" (UniqueName: \"kubernetes.io/projected/19bf085e-32cc-4a29-9a2f-ea0b8045c193-kube-api-access-6wk5z\") pod \"memcached-0\" (UID: \"19bf085e-32cc-4a29-9a2f-ea0b8045c193\") " pod="openstack/memcached-0" Jan 06 14:15:57 crc kubenswrapper[4869]: I0106 14:15:57.032632 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/19bf085e-32cc-4a29-9a2f-ea0b8045c193-memcached-tls-certs\") pod \"memcached-0\" (UID: \"19bf085e-32cc-4a29-9a2f-ea0b8045c193\") " pod="openstack/memcached-0" Jan 06 14:15:57 crc kubenswrapper[4869]: I0106 14:15:57.032658 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19bf085e-32cc-4a29-9a2f-ea0b8045c193-kolla-config\") pod \"memcached-0\" (UID: \"19bf085e-32cc-4a29-9a2f-ea0b8045c193\") " pod="openstack/memcached-0" Jan 06 14:15:57 crc kubenswrapper[4869]: I0106 14:15:57.032749 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19bf085e-32cc-4a29-9a2f-ea0b8045c193-combined-ca-bundle\") pod \"memcached-0\" (UID: \"19bf085e-32cc-4a29-9a2f-ea0b8045c193\") " pod="openstack/memcached-0" Jan 06 14:15:57 crc kubenswrapper[4869]: I0106 14:15:57.033387 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19bf085e-32cc-4a29-9a2f-ea0b8045c193-config-data\") pod \"memcached-0\" (UID: \"19bf085e-32cc-4a29-9a2f-ea0b8045c193\") " pod="openstack/memcached-0" Jan 06 14:15:57 crc kubenswrapper[4869]: I0106 14:15:57.034097 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19bf085e-32cc-4a29-9a2f-ea0b8045c193-kolla-config\") pod \"memcached-0\" (UID: \"19bf085e-32cc-4a29-9a2f-ea0b8045c193\") " pod="openstack/memcached-0" Jan 06 14:15:57 crc kubenswrapper[4869]: I0106 14:15:57.036596 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/19bf085e-32cc-4a29-9a2f-ea0b8045c193-memcached-tls-certs\") pod \"memcached-0\" (UID: \"19bf085e-32cc-4a29-9a2f-ea0b8045c193\") " pod="openstack/memcached-0" Jan 06 14:15:57 crc kubenswrapper[4869]: I0106 14:15:57.037167 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19bf085e-32cc-4a29-9a2f-ea0b8045c193-combined-ca-bundle\") pod \"memcached-0\" (UID: \"19bf085e-32cc-4a29-9a2f-ea0b8045c193\") " pod="openstack/memcached-0" Jan 06 14:15:57 crc kubenswrapper[4869]: I0106 14:15:57.055506 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wk5z\" (UniqueName: \"kubernetes.io/projected/19bf085e-32cc-4a29-9a2f-ea0b8045c193-kube-api-access-6wk5z\") pod \"memcached-0\" (UID: 
\"19bf085e-32cc-4a29-9a2f-ea0b8045c193\") " pod="openstack/memcached-0" Jan 06 14:15:57 crc kubenswrapper[4869]: I0106 14:15:57.282429 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 06 14:15:57 crc kubenswrapper[4869]: I0106 14:15:57.528066 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 06 14:15:57 crc kubenswrapper[4869]: W0106 14:15:57.562138 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5ecad54_1487_4d25_9bd1_e6e486ba59d5.slice/crio-aef126f5261b9e00641f2190607699699a8104be1a2c221c7c1f0b21aad1208e WatchSource:0}: Error finding container aef126f5261b9e00641f2190607699699a8104be1a2c221c7c1f0b21aad1208e: Status 404 returned error can't find the container with id aef126f5261b9e00641f2190607699699a8104be1a2c221c7c1f0b21aad1208e Jan 06 14:15:57 crc kubenswrapper[4869]: I0106 14:15:57.830516 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 06 14:15:57 crc kubenswrapper[4869]: W0106 14:15:57.843583 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19bf085e_32cc_4a29_9a2f_ea0b8045c193.slice/crio-9d727e61666c64300e07b98ce537a7444fcafdb771bdcdf392ae08bf0f058126 WatchSource:0}: Error finding container 9d727e61666c64300e07b98ce537a7444fcafdb771bdcdf392ae08bf0f058126: Status 404 returned error can't find the container with id 9d727e61666c64300e07b98ce537a7444fcafdb771bdcdf392ae08bf0f058126 Jan 06 14:15:58 crc kubenswrapper[4869]: I0106 14:15:58.348926 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"19bf085e-32cc-4a29-9a2f-ea0b8045c193","Type":"ContainerStarted","Data":"9d727e61666c64300e07b98ce537a7444fcafdb771bdcdf392ae08bf0f058126"} Jan 06 14:15:58 crc kubenswrapper[4869]: I0106 14:15:58.351344 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b5ecad54-1487-4d25-9bd1-e6e486ba59d5","Type":"ContainerStarted","Data":"aef126f5261b9e00641f2190607699699a8104be1a2c221c7c1f0b21aad1208e"} Jan 06 14:15:58 crc kubenswrapper[4869]: I0106 14:15:58.413496 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 06 14:15:58 crc kubenswrapper[4869]: I0106 14:15:58.414659 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 06 14:15:58 crc kubenswrapper[4869]: I0106 14:15:58.421444 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-nrqrm" Jan 06 14:15:58 crc kubenswrapper[4869]: I0106 14:15:58.454954 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 06 14:15:58 crc kubenswrapper[4869]: I0106 14:15:58.492250 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7n7s\" (UniqueName: \"kubernetes.io/projected/92078172-9112-49c9-91a9-d694a11411c1-kube-api-access-b7n7s\") pod \"kube-state-metrics-0\" (UID: \"92078172-9112-49c9-91a9-d694a11411c1\") " pod="openstack/kube-state-metrics-0" Jan 06 14:15:58 crc kubenswrapper[4869]: I0106 14:15:58.594798 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7n7s\" (UniqueName: \"kubernetes.io/projected/92078172-9112-49c9-91a9-d694a11411c1-kube-api-access-b7n7s\") pod \"kube-state-metrics-0\" (UID: \"92078172-9112-49c9-91a9-d694a11411c1\") " pod="openstack/kube-state-metrics-0" Jan 06 14:15:58 crc kubenswrapper[4869]: I0106 14:15:58.614297 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7n7s\" (UniqueName: \"kubernetes.io/projected/92078172-9112-49c9-91a9-d694a11411c1-kube-api-access-b7n7s\") pod \"kube-state-metrics-0\" (UID: \"92078172-9112-49c9-91a9-d694a11411c1\") " pod="openstack/kube-state-metrics-0" Jan 06 14:15:58 crc kubenswrapper[4869]: I0106 14:15:58.759392 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 06 14:15:59 crc kubenswrapper[4869]: I0106 14:15:59.837910 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m4j6q"] Jan 06 14:15:59 crc kubenswrapper[4869]: I0106 14:15:59.839882 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m4j6q" Jan 06 14:15:59 crc kubenswrapper[4869]: I0106 14:15:59.859870 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m4j6q"] Jan 06 14:15:59 crc kubenswrapper[4869]: I0106 14:15:59.922339 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/738153bb-7223-4874-a1f5-c7f42c256671-catalog-content\") pod \"redhat-operators-m4j6q\" (UID: \"738153bb-7223-4874-a1f5-c7f42c256671\") " pod="openshift-marketplace/redhat-operators-m4j6q" Jan 06 14:15:59 crc kubenswrapper[4869]: I0106 14:15:59.922400 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrprt\" (UniqueName: \"kubernetes.io/projected/738153bb-7223-4874-a1f5-c7f42c256671-kube-api-access-qrprt\") pod \"redhat-operators-m4j6q\" (UID: \"738153bb-7223-4874-a1f5-c7f42c256671\") " pod="openshift-marketplace/redhat-operators-m4j6q" Jan 06 14:15:59 crc kubenswrapper[4869]: I0106 14:15:59.922424 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/738153bb-7223-4874-a1f5-c7f42c256671-utilities\") pod \"redhat-operators-m4j6q\" (UID: \"738153bb-7223-4874-a1f5-c7f42c256671\") " pod="openshift-marketplace/redhat-operators-m4j6q" Jan 06 14:16:00 crc kubenswrapper[4869]: I0106 14:16:00.024472 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/738153bb-7223-4874-a1f5-c7f42c256671-catalog-content\") pod \"redhat-operators-m4j6q\" (UID: \"738153bb-7223-4874-a1f5-c7f42c256671\") " pod="openshift-marketplace/redhat-operators-m4j6q" Jan 06 14:16:00 crc kubenswrapper[4869]: I0106 14:16:00.024562 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrprt\" (UniqueName: \"kubernetes.io/projected/738153bb-7223-4874-a1f5-c7f42c256671-kube-api-access-qrprt\") pod \"redhat-operators-m4j6q\" (UID: \"738153bb-7223-4874-a1f5-c7f42c256671\") " pod="openshift-marketplace/redhat-operators-m4j6q" Jan 06 14:16:00 crc kubenswrapper[4869]: I0106 14:16:00.024586 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/738153bb-7223-4874-a1f5-c7f42c256671-utilities\") pod \"redhat-operators-m4j6q\" (UID: \"738153bb-7223-4874-a1f5-c7f42c256671\") " pod="openshift-marketplace/redhat-operators-m4j6q" Jan 06 14:16:00 crc kubenswrapper[4869]: I0106 14:16:00.025076 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/738153bb-7223-4874-a1f5-c7f42c256671-catalog-content\") pod \"redhat-operators-m4j6q\" (UID: \"738153bb-7223-4874-a1f5-c7f42c256671\") " pod="openshift-marketplace/redhat-operators-m4j6q" Jan 06 14:16:00 crc kubenswrapper[4869]: I0106 14:16:00.025211 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/738153bb-7223-4874-a1f5-c7f42c256671-utilities\") pod \"redhat-operators-m4j6q\" (UID: \"738153bb-7223-4874-a1f5-c7f42c256671\") " pod="openshift-marketplace/redhat-operators-m4j6q" Jan 06 14:16:00 crc kubenswrapper[4869]: I0106 14:16:00.044726 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qrprt\" (UniqueName: \"kubernetes.io/projected/738153bb-7223-4874-a1f5-c7f42c256671-kube-api-access-qrprt\") pod \"redhat-operators-m4j6q\" (UID: \"738153bb-7223-4874-a1f5-c7f42c256671\") " pod="openshift-marketplace/redhat-operators-m4j6q" Jan 06 14:16:00 crc kubenswrapper[4869]: I0106 14:16:00.168313 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m4j6q" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.659067 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mmg7w"] Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.660520 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.662956 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.663770 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-thqt2" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.667557 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.672135 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mmg7w"] Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.733687 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-64n65"] Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.735782 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.747574 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-64n65"] Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.794598 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aaa27703-fd83-40d0-a8fb-8d6962212f8f-var-run\") pod \"ovn-controller-mmg7w\" (UID: \"aaa27703-fd83-40d0-a8fb-8d6962212f8f\") " pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.794650 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/aaa27703-fd83-40d0-a8fb-8d6962212f8f-var-log-ovn\") pod \"ovn-controller-mmg7w\" (UID: \"aaa27703-fd83-40d0-a8fb-8d6962212f8f\") " pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.794690 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aaa27703-fd83-40d0-a8fb-8d6962212f8f-scripts\") pod \"ovn-controller-mmg7w\" (UID: \"aaa27703-fd83-40d0-a8fb-8d6962212f8f\") " pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.794763 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aaa27703-fd83-40d0-a8fb-8d6962212f8f-var-run-ovn\") pod \"ovn-controller-mmg7w\" (UID: \"aaa27703-fd83-40d0-a8fb-8d6962212f8f\") " pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.794827 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa27703-fd83-40d0-a8fb-8d6962212f8f-combined-ca-bundle\") pod \"ovn-controller-mmg7w\" (UID: \"aaa27703-fd83-40d0-a8fb-8d6962212f8f\") " pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.794845 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8stp\" (UniqueName: \"kubernetes.io/projected/aaa27703-fd83-40d0-a8fb-8d6962212f8f-kube-api-access-d8stp\") pod \"ovn-controller-mmg7w\" (UID: \"aaa27703-fd83-40d0-a8fb-8d6962212f8f\") " pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.794873 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaa27703-fd83-40d0-a8fb-8d6962212f8f-ovn-controller-tls-certs\") pod \"ovn-controller-mmg7w\" (UID: \"aaa27703-fd83-40d0-a8fb-8d6962212f8f\") " pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.896353 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aaa27703-fd83-40d0-a8fb-8d6962212f8f-var-run-ovn\") pod \"ovn-controller-mmg7w\" (UID: \"aaa27703-fd83-40d0-a8fb-8d6962212f8f\") " pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.896469 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgm5v\" (UniqueName: \"kubernetes.io/projected/15ab1556-2fd1-423a-9759-4c1088500a85-kube-api-access-qgm5v\") pod \"ovn-controller-ovs-64n65\" (UID: \"15ab1556-2fd1-423a-9759-4c1088500a85\") " pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.896528 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15ab1556-2fd1-423a-9759-4c1088500a85-scripts\") pod \"ovn-controller-ovs-64n65\" (UID: \"15ab1556-2fd1-423a-9759-4c1088500a85\") " pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.896641 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/15ab1556-2fd1-423a-9759-4c1088500a85-var-log\") pod \"ovn-controller-ovs-64n65\" (UID: \"15ab1556-2fd1-423a-9759-4c1088500a85\") " pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.896709 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa27703-fd83-40d0-a8fb-8d6962212f8f-combined-ca-bundle\") pod \"ovn-controller-mmg7w\" (UID: \"aaa27703-fd83-40d0-a8fb-8d6962212f8f\") " pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.896736 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8stp\" (UniqueName: \"kubernetes.io/projected/aaa27703-fd83-40d0-a8fb-8d6962212f8f-kube-api-access-d8stp\") pod \"ovn-controller-mmg7w\" (UID: \"aaa27703-fd83-40d0-a8fb-8d6962212f8f\") " pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.896790 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaa27703-fd83-40d0-a8fb-8d6962212f8f-ovn-controller-tls-certs\") pod \"ovn-controller-mmg7w\" (UID: \"aaa27703-fd83-40d0-a8fb-8d6962212f8f\") " pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.896898 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/15ab1556-2fd1-423a-9759-4c1088500a85-var-lib\") pod \"ovn-controller-ovs-64n65\" (UID: \"15ab1556-2fd1-423a-9759-4c1088500a85\") " pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.896938 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aaa27703-fd83-40d0-a8fb-8d6962212f8f-var-run\") pod \"ovn-controller-mmg7w\" (UID: \"aaa27703-fd83-40d0-a8fb-8d6962212f8f\") " pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.897013 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/aaa27703-fd83-40d0-a8fb-8d6962212f8f-var-log-ovn\") pod \"ovn-controller-mmg7w\" (UID: \"aaa27703-fd83-40d0-a8fb-8d6962212f8f\") " pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.897080 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/15ab1556-2fd1-423a-9759-4c1088500a85-var-run\") pod \"ovn-controller-ovs-64n65\" (UID: \"15ab1556-2fd1-423a-9759-4c1088500a85\") " pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.897112 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aaa27703-fd83-40d0-a8fb-8d6962212f8f-scripts\") pod \"ovn-controller-mmg7w\" (UID: \"aaa27703-fd83-40d0-a8fb-8d6962212f8f\") " pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.897171 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/15ab1556-2fd1-423a-9759-4c1088500a85-etc-ovs\") pod \"ovn-controller-ovs-64n65\" (UID: \"15ab1556-2fd1-423a-9759-4c1088500a85\") " pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.897259 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/aaa27703-fd83-40d0-a8fb-8d6962212f8f-var-run\") pod \"ovn-controller-mmg7w\" (UID: \"aaa27703-fd83-40d0-a8fb-8d6962212f8f\") " pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.897572 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/aaa27703-fd83-40d0-a8fb-8d6962212f8f-var-log-ovn\") pod \"ovn-controller-mmg7w\" (UID: \"aaa27703-fd83-40d0-a8fb-8d6962212f8f\") " pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.898868 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/aaa27703-fd83-40d0-a8fb-8d6962212f8f-var-run-ovn\") pod \"ovn-controller-mmg7w\" (UID: 
\"aaa27703-fd83-40d0-a8fb-8d6962212f8f\") " pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.899712 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aaa27703-fd83-40d0-a8fb-8d6962212f8f-scripts\") pod \"ovn-controller-mmg7w\" (UID: \"aaa27703-fd83-40d0-a8fb-8d6962212f8f\") " pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.902358 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa27703-fd83-40d0-a8fb-8d6962212f8f-combined-ca-bundle\") pod \"ovn-controller-mmg7w\" (UID: \"aaa27703-fd83-40d0-a8fb-8d6962212f8f\") " pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.902411 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/aaa27703-fd83-40d0-a8fb-8d6962212f8f-ovn-controller-tls-certs\") pod \"ovn-controller-mmg7w\" (UID: \"aaa27703-fd83-40d0-a8fb-8d6962212f8f\") " pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.920596 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8stp\" (UniqueName: \"kubernetes.io/projected/aaa27703-fd83-40d0-a8fb-8d6962212f8f-kube-api-access-d8stp\") pod \"ovn-controller-mmg7w\" (UID: \"aaa27703-fd83-40d0-a8fb-8d6962212f8f\") " pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:02 crc kubenswrapper[4869]: I0106 14:16:02.993215 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.003596 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/15ab1556-2fd1-423a-9759-4c1088500a85-var-lib\") pod \"ovn-controller-ovs-64n65\" (UID: \"15ab1556-2fd1-423a-9759-4c1088500a85\") " pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.003974 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/15ab1556-2fd1-423a-9759-4c1088500a85-var-run\") pod \"ovn-controller-ovs-64n65\" (UID: \"15ab1556-2fd1-423a-9759-4c1088500a85\") " pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.005192 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/15ab1556-2fd1-423a-9759-4c1088500a85-etc-ovs\") pod \"ovn-controller-ovs-64n65\" (UID: \"15ab1556-2fd1-423a-9759-4c1088500a85\") " pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.005361 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgm5v\" (UniqueName: \"kubernetes.io/projected/15ab1556-2fd1-423a-9759-4c1088500a85-kube-api-access-qgm5v\") pod \"ovn-controller-ovs-64n65\" (UID: \"15ab1556-2fd1-423a-9759-4c1088500a85\") " pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.005542 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15ab1556-2fd1-423a-9759-4c1088500a85-scripts\") pod \"ovn-controller-ovs-64n65\" (UID: 
\"15ab1556-2fd1-423a-9759-4c1088500a85\") " pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.005726 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/15ab1556-2fd1-423a-9759-4c1088500a85-var-log\") pod \"ovn-controller-ovs-64n65\" (UID: \"15ab1556-2fd1-423a-9759-4c1088500a85\") " pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.006028 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/15ab1556-2fd1-423a-9759-4c1088500a85-var-log\") pod \"ovn-controller-ovs-64n65\" (UID: \"15ab1556-2fd1-423a-9759-4c1088500a85\") " pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.004266 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/15ab1556-2fd1-423a-9759-4c1088500a85-var-lib\") pod \"ovn-controller-ovs-64n65\" (UID: \"15ab1556-2fd1-423a-9759-4c1088500a85\") " pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.006391 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/15ab1556-2fd1-423a-9759-4c1088500a85-etc-ovs\") pod \"ovn-controller-ovs-64n65\" (UID: \"15ab1556-2fd1-423a-9759-4c1088500a85\") " pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.004333 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/15ab1556-2fd1-423a-9759-4c1088500a85-var-run\") pod \"ovn-controller-ovs-64n65\" (UID: \"15ab1556-2fd1-423a-9759-4c1088500a85\") " pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.009709 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15ab1556-2fd1-423a-9759-4c1088500a85-scripts\") pod \"ovn-controller-ovs-64n65\" (UID: \"15ab1556-2fd1-423a-9759-4c1088500a85\") " pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.053726 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgm5v\" (UniqueName: \"kubernetes.io/projected/15ab1556-2fd1-423a-9759-4c1088500a85-kube-api-access-qgm5v\") pod \"ovn-controller-ovs-64n65\" (UID: \"15ab1556-2fd1-423a-9759-4c1088500a85\") " pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.067310 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.215691 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7x82b"] Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.219209 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7x82b" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.223929 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7x82b"] Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.310516 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860379d2-780a-49cd-8748-020b09e2fe94-catalog-content\") pod \"community-operators-7x82b\" (UID: \"860379d2-780a-49cd-8748-020b09e2fe94\") " pod="openshift-marketplace/community-operators-7x82b" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.310570 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860379d2-780a-49cd-8748-020b09e2fe94-utilities\") pod \"community-operators-7x82b\" (UID: \"860379d2-780a-49cd-8748-020b09e2fe94\") " pod="openshift-marketplace/community-operators-7x82b" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.310645 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxnj9\" (UniqueName: \"kubernetes.io/projected/860379d2-780a-49cd-8748-020b09e2fe94-kube-api-access-mxnj9\") pod \"community-operators-7x82b\" (UID: \"860379d2-780a-49cd-8748-020b09e2fe94\") " pod="openshift-marketplace/community-operators-7x82b" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.413729 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860379d2-780a-49cd-8748-020b09e2fe94-catalog-content\") pod \"community-operators-7x82b\" (UID: \"860379d2-780a-49cd-8748-020b09e2fe94\") " pod="openshift-marketplace/community-operators-7x82b" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.413801 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860379d2-780a-49cd-8748-020b09e2fe94-utilities\") pod \"community-operators-7x82b\" (UID: \"860379d2-780a-49cd-8748-020b09e2fe94\") " pod="openshift-marketplace/community-operators-7x82b" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.413914 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxnj9\" (UniqueName: \"kubernetes.io/projected/860379d2-780a-49cd-8748-020b09e2fe94-kube-api-access-mxnj9\") pod \"community-operators-7x82b\" (UID: \"860379d2-780a-49cd-8748-020b09e2fe94\") " pod="openshift-marketplace/community-operators-7x82b" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.414537 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860379d2-780a-49cd-8748-020b09e2fe94-catalog-content\") pod \"community-operators-7x82b\" (UID: \"860379d2-780a-49cd-8748-020b09e2fe94\") " pod="openshift-marketplace/community-operators-7x82b" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.414892 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860379d2-780a-49cd-8748-020b09e2fe94-utilities\") pod \"community-operators-7x82b\" (UID: \"860379d2-780a-49cd-8748-020b09e2fe94\") " pod="openshift-marketplace/community-operators-7x82b" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.432334 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mxnj9\" (UniqueName: \"kubernetes.io/projected/860379d2-780a-49cd-8748-020b09e2fe94-kube-api-access-mxnj9\") pod \"community-operators-7x82b\" (UID: \"860379d2-780a-49cd-8748-020b09e2fe94\") " pod="openshift-marketplace/community-operators-7x82b" Jan 06 14:16:03 crc kubenswrapper[4869]: I0106 14:16:03.582487 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7x82b" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.346363 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.347894 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.351514 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.351569 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.351726 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.352007 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.353592 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-cg8kr" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.361493 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.446453 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7127872e-e183-49cf-a8e2-153197597bea-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.446527 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7127872e-e183-49cf-a8e2-153197597bea-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.446745 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7127872e-e183-49cf-a8e2-153197597bea-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.446877 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.446941 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/7127872e-e183-49cf-a8e2-153197597bea-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.446983 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgpx7\" (UniqueName: \"kubernetes.io/projected/7127872e-e183-49cf-a8e2-153197597bea-kube-api-access-fgpx7\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.447129 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7127872e-e183-49cf-a8e2-153197597bea-config\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.447171 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7127872e-e183-49cf-a8e2-153197597bea-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.548061 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7127872e-e183-49cf-a8e2-153197597bea-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.548157 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7127872e-e183-49cf-a8e2-153197597bea-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.548228 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7127872e-e183-49cf-a8e2-153197597bea-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.548258 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.548291 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7127872e-e183-49cf-a8e2-153197597bea-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.548320 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgpx7\" (UniqueName: \"kubernetes.io/projected/7127872e-e183-49cf-a8e2-153197597bea-kube-api-access-fgpx7\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " 
pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.548385 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7127872e-e183-49cf-a8e2-153197597bea-config\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.548420 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7127872e-e183-49cf-a8e2-153197597bea-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.548574 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.549043 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7127872e-e183-49cf-a8e2-153197597bea-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.549413 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7127872e-e183-49cf-a8e2-153197597bea-config\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.550194 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7127872e-e183-49cf-a8e2-153197597bea-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.553136 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.554821 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.555952 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7127872e-e183-49cf-a8e2-153197597bea-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.557520 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7127872e-e183-49cf-a8e2-153197597bea-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.558693 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.558915 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.559080 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-m78wt" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.563250 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.568349 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7127872e-e183-49cf-a8e2-153197597bea-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.573111 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgpx7\" (UniqueName: \"kubernetes.io/projected/7127872e-e183-49cf-a8e2-153197597bea-kube-api-access-fgpx7\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.576169 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.604899 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7127872e-e183-49cf-a8e2-153197597bea\") " pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.650215 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a6edbf6-4b64-4319-b863-6e9e5f08746f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.650273 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a6edbf6-4b64-4319-b863-6e9e5f08746f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.650337 
4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6a6edbf6-4b64-4319-b863-6e9e5f08746f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.650369 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzdp2\" (UniqueName: \"kubernetes.io/projected/6a6edbf6-4b64-4319-b863-6e9e5f08746f-kube-api-access-nzdp2\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.650385 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a6edbf6-4b64-4319-b863-6e9e5f08746f-config\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.650442 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a6edbf6-4b64-4319-b863-6e9e5f08746f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.650477 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a6edbf6-4b64-4319-b863-6e9e5f08746f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.650560 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.672030 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.751876 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzdp2\" (UniqueName: \"kubernetes.io/projected/6a6edbf6-4b64-4319-b863-6e9e5f08746f-kube-api-access-nzdp2\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.751937 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a6edbf6-4b64-4319-b863-6e9e5f08746f-config\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.751988 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a6edbf6-4b64-4319-b863-6e9e5f08746f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.752072 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a6edbf6-4b64-4319-b863-6e9e5f08746f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.752149 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.753013 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a6edbf6-4b64-4319-b863-6e9e5f08746f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.753041 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a6edbf6-4b64-4319-b863-6e9e5f08746f-config\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.753092 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a6edbf6-4b64-4319-b863-6e9e5f08746f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.753235 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6a6edbf6-4b64-4319-b863-6e9e5f08746f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.753744 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/6a6edbf6-4b64-4319-b863-6e9e5f08746f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.754102 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a6edbf6-4b64-4319-b863-6e9e5f08746f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.754281 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.756476 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a6edbf6-4b64-4319-b863-6e9e5f08746f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.758989 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a6edbf6-4b64-4319-b863-6e9e5f08746f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.761282 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a6edbf6-4b64-4319-b863-6e9e5f08746f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.770172 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzdp2\" (UniqueName: \"kubernetes.io/projected/6a6edbf6-4b64-4319-b863-6e9e5f08746f-kube-api-access-nzdp2\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.774352 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6a6edbf6-4b64-4319-b863-6e9e5f08746f\") " pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:05 crc kubenswrapper[4869]: I0106 14:16:05.948792 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:07 crc kubenswrapper[4869]: I0106 14:16:07.401325 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tw58w"] Jan 06 14:16:07 crc kubenswrapper[4869]: I0106 14:16:07.403171 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tw58w" Jan 06 14:16:07 crc kubenswrapper[4869]: I0106 14:16:07.412064 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tw58w"] Jan 06 14:16:07 crc kubenswrapper[4869]: I0106 14:16:07.481530 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b636146-6102-4a1b-8dc5-4cc1f737a31e-utilities\") pod \"redhat-marketplace-tw58w\" (UID: \"8b636146-6102-4a1b-8dc5-4cc1f737a31e\") " pod="openshift-marketplace/redhat-marketplace-tw58w" Jan 06 14:16:07 crc kubenswrapper[4869]: I0106 14:16:07.481651 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b636146-6102-4a1b-8dc5-4cc1f737a31e-catalog-content\") pod \"redhat-marketplace-tw58w\" (UID: \"8b636146-6102-4a1b-8dc5-4cc1f737a31e\") " pod="openshift-marketplace/redhat-marketplace-tw58w" Jan 06 14:16:07 crc kubenswrapper[4869]: I0106 14:16:07.481690 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhs97\" (UniqueName: \"kubernetes.io/projected/8b636146-6102-4a1b-8dc5-4cc1f737a31e-kube-api-access-vhs97\") pod \"redhat-marketplace-tw58w\" (UID: \"8b636146-6102-4a1b-8dc5-4cc1f737a31e\") " pod="openshift-marketplace/redhat-marketplace-tw58w" Jan 06 14:16:07 crc kubenswrapper[4869]: I0106 14:16:07.583251 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b636146-6102-4a1b-8dc5-4cc1f737a31e-catalog-content\") pod \"redhat-marketplace-tw58w\" (UID: \"8b636146-6102-4a1b-8dc5-4cc1f737a31e\") " pod="openshift-marketplace/redhat-marketplace-tw58w" Jan 06 14:16:07 crc kubenswrapper[4869]: I0106 14:16:07.583305 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhs97\" (UniqueName: \"kubernetes.io/projected/8b636146-6102-4a1b-8dc5-4cc1f737a31e-kube-api-access-vhs97\") pod \"redhat-marketplace-tw58w\" (UID: \"8b636146-6102-4a1b-8dc5-4cc1f737a31e\") " pod="openshift-marketplace/redhat-marketplace-tw58w" Jan 06 14:16:07 crc kubenswrapper[4869]: I0106 14:16:07.583363 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b636146-6102-4a1b-8dc5-4cc1f737a31e-utilities\") pod \"redhat-marketplace-tw58w\" (UID: \"8b636146-6102-4a1b-8dc5-4cc1f737a31e\") " pod="openshift-marketplace/redhat-marketplace-tw58w" Jan 06 14:16:07 crc kubenswrapper[4869]: I0106 14:16:07.583864 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b636146-6102-4a1b-8dc5-4cc1f737a31e-utilities\") pod \"redhat-marketplace-tw58w\" (UID: \"8b636146-6102-4a1b-8dc5-4cc1f737a31e\") " pod="openshift-marketplace/redhat-marketplace-tw58w" Jan 06 14:16:07 crc kubenswrapper[4869]: I0106 14:16:07.584093 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b636146-6102-4a1b-8dc5-4cc1f737a31e-catalog-content\") pod \"redhat-marketplace-tw58w\" (UID: \"8b636146-6102-4a1b-8dc5-4cc1f737a31e\") " pod="openshift-marketplace/redhat-marketplace-tw58w" Jan 06 14:16:07 crc kubenswrapper[4869]: I0106 14:16:07.601797 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vhs97\" (UniqueName: \"kubernetes.io/projected/8b636146-6102-4a1b-8dc5-4cc1f737a31e-kube-api-access-vhs97\") pod \"redhat-marketplace-tw58w\" (UID: \"8b636146-6102-4a1b-8dc5-4cc1f737a31e\") " pod="openshift-marketplace/redhat-marketplace-tw58w" Jan 06 14:16:07 crc kubenswrapper[4869]: I0106 14:16:07.722090 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tw58w" Jan 06 14:16:10 crc kubenswrapper[4869]: I0106 14:16:10.603185 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9dtd6"] Jan 06 14:16:10 crc kubenswrapper[4869]: I0106 14:16:10.608426 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9dtd6" Jan 06 14:16:10 crc kubenswrapper[4869]: I0106 14:16:10.615843 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9dtd6"] Jan 06 14:16:10 crc kubenswrapper[4869]: I0106 14:16:10.670414 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xb2v\" (UniqueName: \"kubernetes.io/projected/efb637a6-5723-4a78-9f13-8f08edbc01bb-kube-api-access-2xb2v\") pod \"certified-operators-9dtd6\" (UID: \"efb637a6-5723-4a78-9f13-8f08edbc01bb\") " pod="openshift-marketplace/certified-operators-9dtd6" Jan 06 14:16:10 crc kubenswrapper[4869]: I0106 14:16:10.670497 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb637a6-5723-4a78-9f13-8f08edbc01bb-catalog-content\") pod \"certified-operators-9dtd6\" (UID: \"efb637a6-5723-4a78-9f13-8f08edbc01bb\") " pod="openshift-marketplace/certified-operators-9dtd6" Jan 06 14:16:10 crc kubenswrapper[4869]: I0106 14:16:10.670531 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb637a6-5723-4a78-9f13-8f08edbc01bb-utilities\") pod \"certified-operators-9dtd6\" (UID: \"efb637a6-5723-4a78-9f13-8f08edbc01bb\") " pod="openshift-marketplace/certified-operators-9dtd6" Jan 06 14:16:10 crc kubenswrapper[4869]: I0106 14:16:10.772365 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb637a6-5723-4a78-9f13-8f08edbc01bb-catalog-content\") pod \"certified-operators-9dtd6\" (UID: \"efb637a6-5723-4a78-9f13-8f08edbc01bb\") " pod="openshift-marketplace/certified-operators-9dtd6" Jan 06 14:16:10 crc kubenswrapper[4869]: I0106 14:16:10.772431 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb637a6-5723-4a78-9f13-8f08edbc01bb-utilities\") pod \"certified-operators-9dtd6\" (UID: \"efb637a6-5723-4a78-9f13-8f08edbc01bb\") " pod="openshift-marketplace/certified-operators-9dtd6" Jan 06 14:16:10 crc kubenswrapper[4869]: I0106 14:16:10.772627 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xb2v\" (UniqueName: \"kubernetes.io/projected/efb637a6-5723-4a78-9f13-8f08edbc01bb-kube-api-access-2xb2v\") pod \"certified-operators-9dtd6\" (UID: \"efb637a6-5723-4a78-9f13-8f08edbc01bb\") " pod="openshift-marketplace/certified-operators-9dtd6" Jan 06 14:16:10 crc kubenswrapper[4869]: I0106 14:16:10.774363 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb637a6-5723-4a78-9f13-8f08edbc01bb-catalog-content\") pod \"certified-operators-9dtd6\" (UID: \"efb637a6-5723-4a78-9f13-8f08edbc01bb\") " pod="openshift-marketplace/certified-operators-9dtd6" Jan 06 14:16:10 crc kubenswrapper[4869]: I0106 14:16:10.774808 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb637a6-5723-4a78-9f13-8f08edbc01bb-utilities\") pod \"certified-operators-9dtd6\" (UID: \"efb637a6-5723-4a78-9f13-8f08edbc01bb\") " pod="openshift-marketplace/certified-operators-9dtd6" Jan 06 14:16:10 crc kubenswrapper[4869]: I0106 14:16:10.814852 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xb2v\" (UniqueName: \"kubernetes.io/projected/efb637a6-5723-4a78-9f13-8f08edbc01bb-kube-api-access-2xb2v\") pod \"certified-operators-9dtd6\" (UID: \"efb637a6-5723-4a78-9f13-8f08edbc01bb\") " pod="openshift-marketplace/certified-operators-9dtd6" Jan 06 14:16:10 crc kubenswrapper[4869]: I0106 14:16:10.986510 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9dtd6" Jan 06 14:16:18 crc kubenswrapper[4869]: E0106 14:16:18.503073 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 06 14:16:18 crc kubenswrapper[4869]: E0106 14:16:18.503810 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zb6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(be48d5b3-d81d-4bb6-a7a6-7706d8208db8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 06 14:16:18 crc kubenswrapper[4869]: E0106 14:16:18.505014 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="be48d5b3-d81d-4bb6-a7a6-7706d8208db8" Jan 06 14:16:18 crc kubenswrapper[4869]: E0106 14:16:18.518244 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="be48d5b3-d81d-4bb6-a7a6-7706d8208db8" Jan 06 14:16:19 crc kubenswrapper[4869]: E0106 14:16:19.682600 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 06 14:16:19 crc kubenswrapper[4869]: E0106 14:16:19.683081 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins 
/operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rf5bb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(ae2b9cdc-8940-4aeb-bea8-fac416d93eed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 06 14:16:19 crc kubenswrapper[4869]: E0106 14:16:19.684305 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="ae2b9cdc-8940-4aeb-bea8-fac416d93eed" Jan 06 14:16:19 crc kubenswrapper[4869]: E0106 14:16:19.694935 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 06 14:16:19 crc kubenswrapper[4869]: E0106 14:16:19.695107 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins 
/operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xs97c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(a54155a0-94ff-4519-81e3-68a0bb1b62b6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 06 14:16:19 crc kubenswrapper[4869]: E0106 14:16:19.696297 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="a54155a0-94ff-4519-81e3-68a0bb1b62b6" Jan 06 14:16:20 crc kubenswrapper[4869]: E0106 14:16:20.532062 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="a54155a0-94ff-4519-81e3-68a0bb1b62b6" Jan 06 14:16:20 crc kubenswrapper[4869]: E0106 14:16:20.532339 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" 
pod="openstack/rabbitmq-cell1-server-0" podUID="ae2b9cdc-8940-4aeb-bea8-fac416d93eed" Jan 06 14:16:21 crc kubenswrapper[4869]: E0106 14:16:21.812628 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 06 14:16:21 crc kubenswrapper[4869]: E0106 14:16:21.812859 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxtrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(b5ecad54-1487-4d25-9bd1-e6e486ba59d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 06 14:16:21 crc kubenswrapper[4869]: E0106 14:16:21.814032 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="b5ecad54-1487-4d25-9bd1-e6e486ba59d5" Jan 06 14:16:22 crc kubenswrapper[4869]: E0106 14:16:22.561528 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="b5ecad54-1487-4d25-9bd1-e6e486ba59d5" Jan 06 14:16:27 crc kubenswrapper[4869]: I0106 14:16:27.509045 4869 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7x82b"] Jan 06 14:16:27 crc kubenswrapper[4869]: W0106 14:16:27.902203 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod860379d2_780a_49cd_8748_020b09e2fe94.slice/crio-f2cc4cb0b02d0320e73c9293567cca8aac615561090af70efc189ce1b59ed7e0 WatchSource:0}: Error finding container f2cc4cb0b02d0320e73c9293567cca8aac615561090af70efc189ce1b59ed7e0: Status 404 returned error can't find the container with id f2cc4cb0b02d0320e73c9293567cca8aac615561090af70efc189ce1b59ed7e0 Jan 06 14:16:27 crc kubenswrapper[4869]: E0106 14:16:27.913047 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 06 14:16:27 crc kubenswrapper[4869]: E0106 14:16:27.913222 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvrjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-trwtt_openstack(2def269d-7d12-409c-9513-8d3bc8aeba7f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 06 14:16:27 crc kubenswrapper[4869]: E0106 14:16:27.914816 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: 
context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-trwtt" podUID="2def269d-7d12-409c-9513-8d3bc8aeba7f" Jan 06 14:16:27 crc kubenswrapper[4869]: E0106 14:16:27.957094 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 06 14:16:27 crc kubenswrapper[4869]: E0106 14:16:27.957250 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbgsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-cgzgv_openstack(7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 06 14:16:27 crc kubenswrapper[4869]: E0106 14:16:27.958748 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" podUID="7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94" Jan 06 14:16:27 crc kubenswrapper[4869]: E0106 14:16:27.987147 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 06 14:16:27 crc kubenswrapper[4869]: E0106 14:16:27.987510 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init 
container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rnvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-5fwv7_openstack(1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 06 14:16:27 crc kubenswrapper[4869]: E0106 14:16:27.988656 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-5fwv7" podUID="1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a" Jan 06 14:16:27 crc kubenswrapper[4869]: E0106 14:16:27.996843 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 06 14:16:27 crc kubenswrapper[4869]: E0106 14:16:27.997090 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dwnjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-fqmrn_openstack(94d42900-dde5-4d20-973e-ba27d6cf4650): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 06 14:16:27 crc kubenswrapper[4869]: E0106 14:16:27.998824 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-fqmrn" podUID="94d42900-dde5-4d20-973e-ba27d6cf4650" Jan 06 14:16:28 crc kubenswrapper[4869]: I0106 14:16:28.388066 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m4j6q"] Jan 06 14:16:28 crc kubenswrapper[4869]: I0106 14:16:28.617118 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 06 14:16:28 crc kubenswrapper[4869]: I0106 14:16:28.617387 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4j6q" event={"ID":"738153bb-7223-4874-a1f5-c7f42c256671","Type":"ContainerStarted","Data":"26f17b0daa15d8c29e8d7e410b590cb16105337d46fa084b3505f897f041f478"} Jan 06 14:16:28 crc kubenswrapper[4869]: I0106 14:16:28.622355 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"19bf085e-32cc-4a29-9a2f-ea0b8045c193","Type":"ContainerStarted","Data":"9ecd37d52422dc5314312222d5813e5fb835eee0067d2b4eef93ff08ccba4dd6"} Jan 06 14:16:28 crc kubenswrapper[4869]: I0106 14:16:28.623562 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 06 14:16:28 crc kubenswrapper[4869]: W0106 14:16:28.627860 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92078172_9112_49c9_91a9_d694a11411c1.slice/crio-a81daa19b5004666eadbb34c1bc68d3b8fc3a1b8cec315d56503abeff7f9cc4c WatchSource:0}: Error finding container 
a81daa19b5004666eadbb34c1bc68d3b8fc3a1b8cec315d56503abeff7f9cc4c: Status 404 returned error can't find the container with id a81daa19b5004666eadbb34c1bc68d3b8fc3a1b8cec315d56503abeff7f9cc4c Jan 06 14:16:28 crc kubenswrapper[4869]: I0106 14:16:28.627993 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tw58w"] Jan 06 14:16:28 crc kubenswrapper[4869]: I0106 14:16:28.628563 4869 generic.go:334] "Generic (PLEG): container finished" podID="860379d2-780a-49cd-8748-020b09e2fe94" containerID="8de89109f22b848894c7e7b85dfc9ee76aba665721f90cdaaba3a92ff48274a3" exitCode=0 Jan 06 14:16:28 crc kubenswrapper[4869]: I0106 14:16:28.628734 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7x82b" event={"ID":"860379d2-780a-49cd-8748-020b09e2fe94","Type":"ContainerDied","Data":"8de89109f22b848894c7e7b85dfc9ee76aba665721f90cdaaba3a92ff48274a3"} Jan 06 14:16:28 crc kubenswrapper[4869]: I0106 14:16:28.628768 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7x82b" event={"ID":"860379d2-780a-49cd-8748-020b09e2fe94","Type":"ContainerStarted","Data":"f2cc4cb0b02d0320e73c9293567cca8aac615561090af70efc189ce1b59ed7e0"} Jan 06 14:16:28 crc kubenswrapper[4869]: E0106 14:16:28.631957 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" podUID="7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94" Jan 06 14:16:28 crc kubenswrapper[4869]: E0106 14:16:28.632087 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-trwtt" podUID="2def269d-7d12-409c-9513-8d3bc8aeba7f" Jan 06 14:16:28 crc kubenswrapper[4869]: I0106 14:16:28.648844 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9dtd6"] Jan 06 14:16:28 crc kubenswrapper[4869]: I0106 14:16:28.683336 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mmg7w"] Jan 06 14:16:28 crc kubenswrapper[4869]: I0106 14:16:28.688604 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.636523779 podStartE2EDuration="32.688582752s" podCreationTimestamp="2026-01-06 14:15:56 +0000 UTC" firstStartedPulling="2026-01-06 14:15:57.853063625 +0000 UTC m=+976.392751289" lastFinishedPulling="2026-01-06 14:16:27.905122598 +0000 UTC m=+1006.444810262" observedRunningTime="2026-01-06 14:16:28.661794198 +0000 UTC m=+1007.201481862" watchObservedRunningTime="2026-01-06 14:16:28.688582752 +0000 UTC m=+1007.228270416" Jan 06 14:16:28 crc kubenswrapper[4869]: I0106 14:16:28.697919 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 06 14:16:28 crc kubenswrapper[4869]: W0106 14:16:28.702948 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaaa27703_fd83_40d0_a8fb_8d6962212f8f.slice/crio-90d0aeee5b7822f8219e46fbf183f3e056b9ac41123f26c13c11c1e36d26983d WatchSource:0}: Error finding container 90d0aeee5b7822f8219e46fbf183f3e056b9ac41123f26c13c11c1e36d26983d: 
Status 404 returned error can't find the container with id 90d0aeee5b7822f8219e46fbf183f3e056b9ac41123f26c13c11c1e36d26983d Jan 06 14:16:28 crc kubenswrapper[4869]: W0106 14:16:28.705285 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7127872e_e183_49cf_a8e2_153197597bea.slice/crio-798229fe2f6c2ad6f4db7878297b51f251a26dd713772a4cc5642d5819661c80 WatchSource:0}: Error finding container 798229fe2f6c2ad6f4db7878297b51f251a26dd713772a4cc5642d5819661c80: Status 404 returned error can't find the container with id 798229fe2f6c2ad6f4db7878297b51f251a26dd713772a4cc5642d5819661c80 Jan 06 14:16:28 crc kubenswrapper[4869]: I0106 14:16:28.799147 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.153464 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-fqmrn" Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.158523 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5fwv7" Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.281323 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rnvz\" (UniqueName: \"kubernetes.io/projected/1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a-kube-api-access-9rnvz\") pod \"1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a\" (UID: \"1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a\") " Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.281393 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwnjb\" (UniqueName: \"kubernetes.io/projected/94d42900-dde5-4d20-973e-ba27d6cf4650-kube-api-access-dwnjb\") pod \"94d42900-dde5-4d20-973e-ba27d6cf4650\" (UID: \"94d42900-dde5-4d20-973e-ba27d6cf4650\") " Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.281436 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d42900-dde5-4d20-973e-ba27d6cf4650-config\") pod \"94d42900-dde5-4d20-973e-ba27d6cf4650\" (UID: \"94d42900-dde5-4d20-973e-ba27d6cf4650\") " Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.281466 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a-config\") pod \"1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a\" (UID: \"1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a\") " Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.281502 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a-dns-svc\") pod \"1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a\" (UID: \"1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a\") " Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.282224 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a" (UID: "1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.282249 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94d42900-dde5-4d20-973e-ba27d6cf4650-config" (OuterVolumeSpecName: "config") pod "94d42900-dde5-4d20-973e-ba27d6cf4650" (UID: "94d42900-dde5-4d20-973e-ba27d6cf4650"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.282431 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a-config" (OuterVolumeSpecName: "config") pod "1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a" (UID: "1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.288239 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a-kube-api-access-9rnvz" (OuterVolumeSpecName: "kube-api-access-9rnvz") pod "1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a" (UID: "1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a"). InnerVolumeSpecName "kube-api-access-9rnvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.293930 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94d42900-dde5-4d20-973e-ba27d6cf4650-kube-api-access-dwnjb" (OuterVolumeSpecName: "kube-api-access-dwnjb") pod "94d42900-dde5-4d20-973e-ba27d6cf4650" (UID: "94d42900-dde5-4d20-973e-ba27d6cf4650"). InnerVolumeSpecName "kube-api-access-dwnjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.382928 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d42900-dde5-4d20-973e-ba27d6cf4650-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.383193 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.383203 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.383214 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rnvz\" (UniqueName: \"kubernetes.io/projected/1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a-kube-api-access-9rnvz\") on node \"crc\" DevicePath \"\"" Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.383224 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwnjb\" (UniqueName: \"kubernetes.io/projected/94d42900-dde5-4d20-973e-ba27d6cf4650-kube-api-access-dwnjb\") on node \"crc\" DevicePath \"\"" Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.638303 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mmg7w" event={"ID":"aaa27703-fd83-40d0-a8fb-8d6962212f8f","Type":"ContainerStarted","Data":"90d0aeee5b7822f8219e46fbf183f3e056b9ac41123f26c13c11c1e36d26983d"} Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.640038 4869 generic.go:334] "Generic (PLEG): 
container finished" podID="738153bb-7223-4874-a1f5-c7f42c256671" containerID="a692aadc13b47262d218d0877b1d22f8791b0776c14633bbef895560c088eaf8" exitCode=0 Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.640097 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4j6q" event={"ID":"738153bb-7223-4874-a1f5-c7f42c256671","Type":"ContainerDied","Data":"a692aadc13b47262d218d0877b1d22f8791b0776c14633bbef895560c088eaf8"} Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.641919 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-fqmrn" Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.641965 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-fqmrn" event={"ID":"94d42900-dde5-4d20-973e-ba27d6cf4650","Type":"ContainerDied","Data":"6ac809b363f900bcd666a2db48a392df3c4d7370d35932790c302e2039449957"} Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.643497 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-5fwv7" event={"ID":"1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a","Type":"ContainerDied","Data":"3eb1ce1472f97a80de0dd73e5267dac36cf0e9b2e9ad78630a111c12baaaccab"} Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.643520 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5fwv7" Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.645998 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"92078172-9112-49c9-91a9-d694a11411c1","Type":"ContainerStarted","Data":"a81daa19b5004666eadbb34c1bc68d3b8fc3a1b8cec315d56503abeff7f9cc4c"} Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.647700 4869 generic.go:334] "Generic (PLEG): container finished" podID="8b636146-6102-4a1b-8dc5-4cc1f737a31e" containerID="0131707a1906da7548d0bfe73d0df6fcdfbed65a339ebb323ac95876ba3b260a" exitCode=0 Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.647765 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw58w" event={"ID":"8b636146-6102-4a1b-8dc5-4cc1f737a31e","Type":"ContainerDied","Data":"0131707a1906da7548d0bfe73d0df6fcdfbed65a339ebb323ac95876ba3b260a"} Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.647786 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw58w" event={"ID":"8b636146-6102-4a1b-8dc5-4cc1f737a31e","Type":"ContainerStarted","Data":"9d5def06b6d242ff158c613ee62c1199bbc7f6e8d2fa66171e4f5b93af6eeebf"} Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.649437 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7127872e-e183-49cf-a8e2-153197597bea","Type":"ContainerStarted","Data":"798229fe2f6c2ad6f4db7878297b51f251a26dd713772a4cc5642d5819661c80"} Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.650500 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6a6edbf6-4b64-4319-b863-6e9e5f08746f","Type":"ContainerStarted","Data":"97236a727c36d1dfa1964a3c062ca712d43148b16bccb77b4420876769440e6c"} Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.652513 4869 generic.go:334] "Generic (PLEG): container finished" podID="efb637a6-5723-4a78-9f13-8f08edbc01bb" containerID="79cbceb5aa8405afbda1edde41006bf62eaf514731108681a83c87e67caf486d" exitCode=0 Jan 06 
14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.652861 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dtd6" event={"ID":"efb637a6-5723-4a78-9f13-8f08edbc01bb","Type":"ContainerDied","Data":"79cbceb5aa8405afbda1edde41006bf62eaf514731108681a83c87e67caf486d"} Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.652914 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dtd6" event={"ID":"efb637a6-5723-4a78-9f13-8f08edbc01bb","Type":"ContainerStarted","Data":"010ea47ded2440d3f3148cc670168fdd046a1dd2c0c482c7eb6406bee482b595"} Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.753202 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5fwv7"] Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.753261 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5fwv7"] Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.825504 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-fqmrn"] Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.833030 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-fqmrn"] Jan 06 14:16:29 crc kubenswrapper[4869]: I0106 14:16:29.838237 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-64n65"] Jan 06 14:16:29 crc kubenswrapper[4869]: W0106 14:16:29.872872 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15ab1556_2fd1_423a_9759_4c1088500a85.slice/crio-2ea713fdd09245d4ced20f0641feb3e49ef74ccd86ab3f8049ad5cc9b22ef2d2 WatchSource:0}: Error finding container 2ea713fdd09245d4ced20f0641feb3e49ef74ccd86ab3f8049ad5cc9b22ef2d2: Status 404 returned error can't find the container with id 2ea713fdd09245d4ced20f0641feb3e49ef74ccd86ab3f8049ad5cc9b22ef2d2 Jan 06 14:16:30 crc kubenswrapper[4869]: I0106 14:16:30.665839 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-64n65" event={"ID":"15ab1556-2fd1-423a-9759-4c1088500a85","Type":"ContainerStarted","Data":"2ea713fdd09245d4ced20f0641feb3e49ef74ccd86ab3f8049ad5cc9b22ef2d2"} Jan 06 14:16:31 crc kubenswrapper[4869]: I0106 14:16:31.716988 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a" path="/var/lib/kubelet/pods/1dd20d92-cab6-4bdc-b9d6-8eac6e189f3a/volumes" Jan 06 14:16:31 crc kubenswrapper[4869]: I0106 14:16:31.718090 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94d42900-dde5-4d20-973e-ba27d6cf4650" path="/var/lib/kubelet/pods/94d42900-dde5-4d20-973e-ba27d6cf4650/volumes" Jan 06 14:16:33 crc kubenswrapper[4869]: I0106 14:16:33.688095 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mmg7w" event={"ID":"aaa27703-fd83-40d0-a8fb-8d6962212f8f","Type":"ContainerStarted","Data":"5d5088aec0d258dce77556b5057b7405e97c7bf6be7414fb00f8de069dbefce0"} Jan 06 14:16:33 crc kubenswrapper[4869]: I0106 14:16:33.688846 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-mmg7w" Jan 06 14:16:33 crc kubenswrapper[4869]: I0106 14:16:33.697596 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4j6q" 
event={"ID":"738153bb-7223-4874-a1f5-c7f42c256671","Type":"ContainerStarted","Data":"892a479b1f05cfb197b7e2b629ba35998b2aa0c84f4cb7835ca857530e6d249f"} Jan 06 14:16:33 crc kubenswrapper[4869]: I0106 14:16:33.702402 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6a6edbf6-4b64-4319-b863-6e9e5f08746f","Type":"ContainerStarted","Data":"7d84df5768765809321f6529f49d293f657cac4a43d821d74d72df756e773b83"} Jan 06 14:16:33 crc kubenswrapper[4869]: I0106 14:16:33.707612 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-mmg7w" podStartSLOduration=27.916296326 podStartE2EDuration="31.707586142s" podCreationTimestamp="2026-01-06 14:16:02 +0000 UTC" firstStartedPulling="2026-01-06 14:16:28.707444507 +0000 UTC m=+1007.247132171" lastFinishedPulling="2026-01-06 14:16:32.498734283 +0000 UTC m=+1011.038421987" observedRunningTime="2026-01-06 14:16:33.705187642 +0000 UTC m=+1012.244875316" watchObservedRunningTime="2026-01-06 14:16:33.707586142 +0000 UTC m=+1012.247273806" Jan 06 14:16:33 crc kubenswrapper[4869]: I0106 14:16:33.718515 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7127872e-e183-49cf-a8e2-153197597bea","Type":"ContainerStarted","Data":"2004369305f8e5bc075d22c749291a47a01f10e19f3abed5bb43e0d05b28bdc3"} Jan 06 14:16:33 crc kubenswrapper[4869]: I0106 14:16:33.719200 4869 generic.go:334] "Generic (PLEG): container finished" podID="8b636146-6102-4a1b-8dc5-4cc1f737a31e" containerID="d0cb2b1a90b0edc96ba7c0cc42e3163b119b0ceb2775e3bbbc58135d7b901007" exitCode=0 Jan 06 14:16:33 crc kubenswrapper[4869]: I0106 14:16:33.719354 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw58w" event={"ID":"8b636146-6102-4a1b-8dc5-4cc1f737a31e","Type":"ContainerDied","Data":"d0cb2b1a90b0edc96ba7c0cc42e3163b119b0ceb2775e3bbbc58135d7b901007"} Jan 06 14:16:33 crc kubenswrapper[4869]: I0106 14:16:33.730141 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-64n65" event={"ID":"15ab1556-2fd1-423a-9759-4c1088500a85","Type":"ContainerStarted","Data":"c230a03d7f72ae706daa6041a1c05cb1ed0d98192c4f056c567db81ceb44bed1"} Jan 06 14:16:33 crc kubenswrapper[4869]: I0106 14:16:33.739687 4869 generic.go:334] "Generic (PLEG): container finished" podID="860379d2-780a-49cd-8748-020b09e2fe94" containerID="80a31cf15a305a456ec3c88a3c2d5c944cc2597f5b6e40e8aeacf60eff5bd881" exitCode=0 Jan 06 14:16:33 crc kubenswrapper[4869]: I0106 14:16:33.739807 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7x82b" event={"ID":"860379d2-780a-49cd-8748-020b09e2fe94","Type":"ContainerDied","Data":"80a31cf15a305a456ec3c88a3c2d5c944cc2597f5b6e40e8aeacf60eff5bd881"} Jan 06 14:16:33 crc kubenswrapper[4869]: I0106 14:16:33.742054 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"be48d5b3-d81d-4bb6-a7a6-7706d8208db8","Type":"ContainerStarted","Data":"3240c885a0e28429670681cc1baf5b4acddb91c97ce83b455bac2bf1edd3c102"} Jan 06 14:16:33 crc kubenswrapper[4869]: I0106 14:16:33.750571 4869 generic.go:334] "Generic (PLEG): container finished" podID="efb637a6-5723-4a78-9f13-8f08edbc01bb" containerID="9a9408ae48daaf71791de843b198acc834a3c0e2e5f3d90bc3c4881a2b3fb415" exitCode=0 Jan 06 14:16:33 crc kubenswrapper[4869]: I0106 14:16:33.750744 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-9dtd6" event={"ID":"efb637a6-5723-4a78-9f13-8f08edbc01bb","Type":"ContainerDied","Data":"9a9408ae48daaf71791de843b198acc834a3c0e2e5f3d90bc3c4881a2b3fb415"} Jan 06 14:16:33 crc kubenswrapper[4869]: I0106 14:16:33.754866 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"92078172-9112-49c9-91a9-d694a11411c1","Type":"ContainerStarted","Data":"a0df49e4d5672faed740a1dd8a87206648915ac18f259c76a82236e0f8dfb933"} Jan 06 14:16:33 crc kubenswrapper[4869]: I0106 14:16:33.755434 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 06 14:16:33 crc kubenswrapper[4869]: I0106 14:16:33.841904 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=32.045561032 podStartE2EDuration="35.841884535s" podCreationTimestamp="2026-01-06 14:15:58 +0000 UTC" firstStartedPulling="2026-01-06 14:16:28.653368495 +0000 UTC m=+1007.193056159" lastFinishedPulling="2026-01-06 14:16:32.449691998 +0000 UTC m=+1010.989379662" observedRunningTime="2026-01-06 14:16:33.82502956 +0000 UTC m=+1012.364717244" watchObservedRunningTime="2026-01-06 14:16:33.841884535 +0000 UTC m=+1012.381572199" Jan 06 14:16:34 crc kubenswrapper[4869]: I0106 14:16:34.770710 4869 generic.go:334] "Generic (PLEG): container finished" podID="738153bb-7223-4874-a1f5-c7f42c256671" containerID="892a479b1f05cfb197b7e2b629ba35998b2aa0c84f4cb7835ca857530e6d249f" exitCode=0 Jan 06 14:16:34 crc kubenswrapper[4869]: I0106 14:16:34.771192 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4j6q" event={"ID":"738153bb-7223-4874-a1f5-c7f42c256671","Type":"ContainerDied","Data":"892a479b1f05cfb197b7e2b629ba35998b2aa0c84f4cb7835ca857530e6d249f"} Jan 06 14:16:34 crc kubenswrapper[4869]: I0106 14:16:34.775737 4869 generic.go:334] "Generic (PLEG): container finished" podID="15ab1556-2fd1-423a-9759-4c1088500a85" containerID="c230a03d7f72ae706daa6041a1c05cb1ed0d98192c4f056c567db81ceb44bed1" exitCode=0 Jan 06 14:16:34 crc kubenswrapper[4869]: I0106 14:16:34.776004 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-64n65" event={"ID":"15ab1556-2fd1-423a-9759-4c1088500a85","Type":"ContainerDied","Data":"c230a03d7f72ae706daa6041a1c05cb1ed0d98192c4f056c567db81ceb44bed1"} Jan 06 14:16:35 crc kubenswrapper[4869]: I0106 14:16:35.804579 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw58w" event={"ID":"8b636146-6102-4a1b-8dc5-4cc1f737a31e","Type":"ContainerStarted","Data":"4bcf6b138146f7b76e8825edfa3c8d6bd8d5e88fb8a826f0e5ea27c254c9b12e"} Jan 06 14:16:35 crc kubenswrapper[4869]: I0106 14:16:35.814125 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-64n65" event={"ID":"15ab1556-2fd1-423a-9759-4c1088500a85","Type":"ContainerStarted","Data":"61c6230bb1b452b6519603364a7b5eb1a9957cf89ffa78ace94874c0bdc07ec0"} Jan 06 14:16:35 crc kubenswrapper[4869]: I0106 14:16:35.814191 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-64n65" event={"ID":"15ab1556-2fd1-423a-9759-4c1088500a85","Type":"ContainerStarted","Data":"d5aba65e5822c3d90ef98542bc88f1c7e159957415282d81750c8baaf4f4ca25"} Jan 06 14:16:35 crc kubenswrapper[4869]: I0106 14:16:35.814399 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-64n65" 
Jan 06 14:16:35 crc kubenswrapper[4869]: I0106 14:16:35.814418 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:16:35 crc kubenswrapper[4869]: I0106 14:16:35.823584 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7x82b" event={"ID":"860379d2-780a-49cd-8748-020b09e2fe94","Type":"ContainerStarted","Data":"f0aaad52a66c1ab08f49b2771c611043ffe377e34681765ad2310366119d9f09"} Jan 06 14:16:35 crc kubenswrapper[4869]: I0106 14:16:35.829522 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tw58w" podStartSLOduration=23.71247229 podStartE2EDuration="28.82949937s" podCreationTimestamp="2026-01-06 14:16:07 +0000 UTC" firstStartedPulling="2026-01-06 14:16:29.659098548 +0000 UTC m=+1008.198786212" lastFinishedPulling="2026-01-06 14:16:34.776125628 +0000 UTC m=+1013.315813292" observedRunningTime="2026-01-06 14:16:35.828753841 +0000 UTC m=+1014.368441525" watchObservedRunningTime="2026-01-06 14:16:35.82949937 +0000 UTC m=+1014.369187034" Jan 06 14:16:35 crc kubenswrapper[4869]: I0106 14:16:35.830922 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dtd6" event={"ID":"efb637a6-5723-4a78-9f13-8f08edbc01bb","Type":"ContainerStarted","Data":"708f42f9ef9e9472032888cdd8520c4888ea06cbb7e5164a488c3b7e7c7b6325"} Jan 06 14:16:35 crc kubenswrapper[4869]: I0106 14:16:35.888465 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7x82b" podStartSLOduration=26.840244301 podStartE2EDuration="32.888439645s" podCreationTimestamp="2026-01-06 14:16:03 +0000 UTC" firstStartedPulling="2026-01-06 14:16:28.638750737 +0000 UTC m=+1007.178438401" lastFinishedPulling="2026-01-06 14:16:34.686946081 +0000 UTC m=+1013.226633745" observedRunningTime="2026-01-06 14:16:35.857850704 +0000 UTC m=+1014.397538378" watchObservedRunningTime="2026-01-06 14:16:35.888439645 +0000 UTC m=+1014.428127309" Jan 06 14:16:35 crc kubenswrapper[4869]: I0106 14:16:35.917160 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-64n65" podStartSLOduration=31.142619153 podStartE2EDuration="33.917138009s" podCreationTimestamp="2026-01-06 14:16:02 +0000 UTC" firstStartedPulling="2026-01-06 14:16:29.874903853 +0000 UTC m=+1008.414591517" lastFinishedPulling="2026-01-06 14:16:32.649422709 +0000 UTC m=+1011.189110373" observedRunningTime="2026-01-06 14:16:35.893752979 +0000 UTC m=+1014.433440673" watchObservedRunningTime="2026-01-06 14:16:35.917138009 +0000 UTC m=+1014.456825673" Jan 06 14:16:35 crc kubenswrapper[4869]: I0106 14:16:35.922277 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9dtd6" podStartSLOduration=20.842821484 podStartE2EDuration="25.922264907s" podCreationTimestamp="2026-01-06 14:16:10 +0000 UTC" firstStartedPulling="2026-01-06 14:16:29.659110568 +0000 UTC m=+1008.198798232" lastFinishedPulling="2026-01-06 14:16:34.738553991 +0000 UTC m=+1013.278241655" observedRunningTime="2026-01-06 14:16:35.917409745 +0000 UTC m=+1014.457097409" watchObservedRunningTime="2026-01-06 14:16:35.922264907 +0000 UTC m=+1014.461952571" Jan 06 14:16:37 crc kubenswrapper[4869]: I0106 14:16:37.284472 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 06 14:16:37 crc 
kubenswrapper[4869]: I0106 14:16:37.723427 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tw58w" Jan 06 14:16:37 crc kubenswrapper[4869]: I0106 14:16:37.723476 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tw58w" Jan 06 14:16:37 crc kubenswrapper[4869]: I0106 14:16:37.776838 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tw58w" Jan 06 14:16:38 crc kubenswrapper[4869]: I0106 14:16:38.764577 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 06 14:16:40 crc kubenswrapper[4869]: I0106 14:16:40.987375 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9dtd6" Jan 06 14:16:40 crc kubenswrapper[4869]: I0106 14:16:40.989080 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9dtd6" Jan 06 14:16:41 crc kubenswrapper[4869]: I0106 14:16:41.037802 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9dtd6" Jan 06 14:16:41 crc kubenswrapper[4869]: I0106 14:16:41.953361 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9dtd6" Jan 06 14:16:42 crc kubenswrapper[4869]: I0106 14:16:42.912450 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b5ecad54-1487-4d25-9bd1-e6e486ba59d5","Type":"ContainerStarted","Data":"4c6d6694d0eacc7d4615cdd5908321a8f33e64a3ca254e0ec868279b17f076e2"} Jan 06 14:16:43 crc kubenswrapper[4869]: I0106 14:16:43.582964 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7x82b" Jan 06 14:16:43 crc kubenswrapper[4869]: I0106 14:16:43.583394 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7x82b" Jan 06 14:16:43 crc kubenswrapper[4869]: I0106 14:16:43.633098 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7x82b" Jan 06 14:16:43 crc kubenswrapper[4869]: I0106 14:16:43.717778 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9dtd6"] Jan 06 14:16:43 crc kubenswrapper[4869]: I0106 14:16:43.981533 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7x82b" Jan 06 14:16:44 crc kubenswrapper[4869]: I0106 14:16:44.942056 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9dtd6" podUID="efb637a6-5723-4a78-9f13-8f08edbc01bb" containerName="registry-server" containerID="cri-o://708f42f9ef9e9472032888cdd8520c4888ea06cbb7e5164a488c3b7e7c7b6325" gracePeriod=2 Jan 06 14:16:45 crc kubenswrapper[4869]: I0106 14:16:45.918453 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7x82b"] Jan 06 14:16:45 crc kubenswrapper[4869]: I0106 14:16:45.950001 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7x82b" podUID="860379d2-780a-49cd-8748-020b09e2fe94" containerName="registry-server" 
containerID="cri-o://f0aaad52a66c1ab08f49b2771c611043ffe377e34681765ad2310366119d9f09" gracePeriod=2 Jan 06 14:16:46 crc kubenswrapper[4869]: I0106 14:16:46.960331 4869 generic.go:334] "Generic (PLEG): container finished" podID="860379d2-780a-49cd-8748-020b09e2fe94" containerID="f0aaad52a66c1ab08f49b2771c611043ffe377e34681765ad2310366119d9f09" exitCode=0 Jan 06 14:16:46 crc kubenswrapper[4869]: I0106 14:16:46.960443 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7x82b" event={"ID":"860379d2-780a-49cd-8748-020b09e2fe94","Type":"ContainerDied","Data":"f0aaad52a66c1ab08f49b2771c611043ffe377e34681765ad2310366119d9f09"} Jan 06 14:16:46 crc kubenswrapper[4869]: I0106 14:16:46.963860 4869 generic.go:334] "Generic (PLEG): container finished" podID="efb637a6-5723-4a78-9f13-8f08edbc01bb" containerID="708f42f9ef9e9472032888cdd8520c4888ea06cbb7e5164a488c3b7e7c7b6325" exitCode=0 Jan 06 14:16:46 crc kubenswrapper[4869]: I0106 14:16:46.963969 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dtd6" event={"ID":"efb637a6-5723-4a78-9f13-8f08edbc01bb","Type":"ContainerDied","Data":"708f42f9ef9e9472032888cdd8520c4888ea06cbb7e5164a488c3b7e7c7b6325"} Jan 06 14:16:47 crc kubenswrapper[4869]: I0106 14:16:47.777685 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tw58w" Jan 06 14:16:47 crc kubenswrapper[4869]: I0106 14:16:47.973974 4869 generic.go:334] "Generic (PLEG): container finished" podID="be48d5b3-d81d-4bb6-a7a6-7706d8208db8" containerID="3240c885a0e28429670681cc1baf5b4acddb91c97ce83b455bac2bf1edd3c102" exitCode=0 Jan 06 14:16:47 crc kubenswrapper[4869]: I0106 14:16:47.974306 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"be48d5b3-d81d-4bb6-a7a6-7706d8208db8","Type":"ContainerDied","Data":"3240c885a0e28429670681cc1baf5b4acddb91c97ce83b455bac2bf1edd3c102"} Jan 06 14:16:49 crc kubenswrapper[4869]: E0106 14:16:49.648055 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Jan 06 14:16:49 crc kubenswrapper[4869]: E0106 14:16:49.648695 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n669h59fh79h648h565h97h54dh674h9ch55ch96h5dch7ch567h689h54dhdh5c5h667h64bh575hf8hb8h77h59dh65dh5cdh696h54h5h5c5h544q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fgpx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(7127872e-e183-49cf-a8e2-153197597bea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 06 14:16:49 crc kubenswrapper[4869]: E0106 14:16:49.649853 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="7127872e-e183-49cf-a8e2-153197597bea" Jan 06 14:16:49 crc kubenswrapper[4869]: E0106 14:16:49.682162 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Jan 06 14:16:49 crc kubenswrapper[4869]: E0106 14:16:49.682355 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n56bh99h68h677h56dh5d7h5c4h9bh685h545h85h54dh698h66ch565hf9h97h688h5d6h649h87h5b7h544h7fh586h676h65h67fhf5h5fh55fhb6q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzdp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(6a6edbf6-4b64-4319-b863-6e9e5f08746f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 06 14:16:49 crc kubenswrapper[4869]: E0106 14:16:49.683655 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="6a6edbf6-4b64-4319-b863-6e9e5f08746f" Jan 06 14:16:49 crc kubenswrapper[4869]: I0106 14:16:49.779524 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7x82b" Jan 06 14:16:49 crc kubenswrapper[4869]: I0106 14:16:49.859583 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860379d2-780a-49cd-8748-020b09e2fe94-utilities\") pod \"860379d2-780a-49cd-8748-020b09e2fe94\" (UID: \"860379d2-780a-49cd-8748-020b09e2fe94\") " Jan 06 14:16:49 crc kubenswrapper[4869]: I0106 14:16:49.860047 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxnj9\" (UniqueName: \"kubernetes.io/projected/860379d2-780a-49cd-8748-020b09e2fe94-kube-api-access-mxnj9\") pod \"860379d2-780a-49cd-8748-020b09e2fe94\" (UID: \"860379d2-780a-49cd-8748-020b09e2fe94\") " Jan 06 14:16:49 crc kubenswrapper[4869]: I0106 14:16:49.860126 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860379d2-780a-49cd-8748-020b09e2fe94-catalog-content\") pod \"860379d2-780a-49cd-8748-020b09e2fe94\" (UID: \"860379d2-780a-49cd-8748-020b09e2fe94\") " Jan 06 14:16:49 crc kubenswrapper[4869]: I0106 14:16:49.860594 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/860379d2-780a-49cd-8748-020b09e2fe94-utilities" (OuterVolumeSpecName: "utilities") pod "860379d2-780a-49cd-8748-020b09e2fe94" (UID: "860379d2-780a-49cd-8748-020b09e2fe94"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:16:49 crc kubenswrapper[4869]: I0106 14:16:49.876834 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/860379d2-780a-49cd-8748-020b09e2fe94-kube-api-access-mxnj9" (OuterVolumeSpecName: "kube-api-access-mxnj9") pod "860379d2-780a-49cd-8748-020b09e2fe94" (UID: "860379d2-780a-49cd-8748-020b09e2fe94"). InnerVolumeSpecName "kube-api-access-mxnj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:16:49 crc kubenswrapper[4869]: I0106 14:16:49.915287 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/860379d2-780a-49cd-8748-020b09e2fe94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "860379d2-780a-49cd-8748-020b09e2fe94" (UID: "860379d2-780a-49cd-8748-020b09e2fe94"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:16:49 crc kubenswrapper[4869]: I0106 14:16:49.962106 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxnj9\" (UniqueName: \"kubernetes.io/projected/860379d2-780a-49cd-8748-020b09e2fe94-kube-api-access-mxnj9\") on node \"crc\" DevicePath \"\"" Jan 06 14:16:49 crc kubenswrapper[4869]: I0106 14:16:49.962141 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/860379d2-780a-49cd-8748-020b09e2fe94-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:16:49 crc kubenswrapper[4869]: I0106 14:16:49.962153 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/860379d2-780a-49cd-8748-020b09e2fe94-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:16:49 crc kubenswrapper[4869]: I0106 14:16:49.973334 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9dtd6" Jan 06 14:16:49 crc kubenswrapper[4869]: I0106 14:16:49.993916 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7x82b" event={"ID":"860379d2-780a-49cd-8748-020b09e2fe94","Type":"ContainerDied","Data":"f2cc4cb0b02d0320e73c9293567cca8aac615561090af70efc189ce1b59ed7e0"} Jan 06 14:16:49 crc kubenswrapper[4869]: I0106 14:16:49.993989 4869 scope.go:117] "RemoveContainer" containerID="f0aaad52a66c1ab08f49b2771c611043ffe377e34681765ad2310366119d9f09" Jan 06 14:16:49 crc kubenswrapper[4869]: I0106 14:16:49.994204 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7x82b" Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.000888 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4j6q" event={"ID":"738153bb-7223-4874-a1f5-c7f42c256671","Type":"ContainerStarted","Data":"f31aecade318392540029ff3cf9405612b269d56156cf231963f826ce31ca3c8"} Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.004641 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9dtd6" Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.004814 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9dtd6" event={"ID":"efb637a6-5723-4a78-9f13-8f08edbc01bb","Type":"ContainerDied","Data":"010ea47ded2440d3f3148cc670168fdd046a1dd2c0c482c7eb6406bee482b595"} Jan 06 14:16:50 crc kubenswrapper[4869]: E0106 14:16:50.017929 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="7127872e-e183-49cf-a8e2-153197597bea" Jan 06 14:16:50 crc kubenswrapper[4869]: E0106 14:16:50.018016 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="6a6edbf6-4b64-4319-b863-6e9e5f08746f" Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.018097 4869 scope.go:117] "RemoveContainer" containerID="80a31cf15a305a456ec3c88a3c2d5c944cc2597f5b6e40e8aeacf60eff5bd881" Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.035938 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m4j6q" podStartSLOduration=30.9823607 podStartE2EDuration="51.035918798s" podCreationTimestamp="2026-01-06 14:15:59 +0000 UTC" firstStartedPulling="2026-01-06 14:16:29.659516998 +0000 UTC m=+1008.199204662" lastFinishedPulling="2026-01-06 14:16:49.713075096 +0000 UTC m=+1028.252762760" observedRunningTime="2026-01-06 14:16:50.034823171 +0000 UTC m=+1028.574510835" watchObservedRunningTime="2026-01-06 14:16:50.035918798 +0000 UTC m=+1028.575606462" Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.063379 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb637a6-5723-4a78-9f13-8f08edbc01bb-utilities\") pod \"efb637a6-5723-4a78-9f13-8f08edbc01bb\" (UID: 
\"efb637a6-5723-4a78-9f13-8f08edbc01bb\") " Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.063934 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb637a6-5723-4a78-9f13-8f08edbc01bb-catalog-content\") pod \"efb637a6-5723-4a78-9f13-8f08edbc01bb\" (UID: \"efb637a6-5723-4a78-9f13-8f08edbc01bb\") " Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.064013 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xb2v\" (UniqueName: \"kubernetes.io/projected/efb637a6-5723-4a78-9f13-8f08edbc01bb-kube-api-access-2xb2v\") pod \"efb637a6-5723-4a78-9f13-8f08edbc01bb\" (UID: \"efb637a6-5723-4a78-9f13-8f08edbc01bb\") " Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.064537 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efb637a6-5723-4a78-9f13-8f08edbc01bb-utilities" (OuterVolumeSpecName: "utilities") pod "efb637a6-5723-4a78-9f13-8f08edbc01bb" (UID: "efb637a6-5723-4a78-9f13-8f08edbc01bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.067152 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efb637a6-5723-4a78-9f13-8f08edbc01bb-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.071790 4869 scope.go:117] "RemoveContainer" containerID="8de89109f22b848894c7e7b85dfc9ee76aba665721f90cdaaba3a92ff48274a3" Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.082274 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7x82b"] Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.096022 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7x82b"] Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.117610 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efb637a6-5723-4a78-9f13-8f08edbc01bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efb637a6-5723-4a78-9f13-8f08edbc01bb" (UID: "efb637a6-5723-4a78-9f13-8f08edbc01bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.160236 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efb637a6-5723-4a78-9f13-8f08edbc01bb-kube-api-access-2xb2v" (OuterVolumeSpecName: "kube-api-access-2xb2v") pod "efb637a6-5723-4a78-9f13-8f08edbc01bb" (UID: "efb637a6-5723-4a78-9f13-8f08edbc01bb"). InnerVolumeSpecName "kube-api-access-2xb2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.169130 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m4j6q" Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.169170 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m4j6q" Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.170332 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xb2v\" (UniqueName: \"kubernetes.io/projected/efb637a6-5723-4a78-9f13-8f08edbc01bb-kube-api-access-2xb2v\") on node \"crc\" DevicePath \"\"" Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.170364 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efb637a6-5723-4a78-9f13-8f08edbc01bb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.181844 4869 scope.go:117] "RemoveContainer" containerID="708f42f9ef9e9472032888cdd8520c4888ea06cbb7e5164a488c3b7e7c7b6325" Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.198010 4869 scope.go:117] "RemoveContainer" containerID="9a9408ae48daaf71791de843b198acc834a3c0e2e5f3d90bc3c4881a2b3fb415" Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.213413 4869 scope.go:117] "RemoveContainer" containerID="79cbceb5aa8405afbda1edde41006bf62eaf514731108681a83c87e67caf486d" Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.337192 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9dtd6"] Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.349456 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9dtd6"] Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.672432 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.672481 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.716806 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tw58w"] Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.717024 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tw58w" podUID="8b636146-6102-4a1b-8dc5-4cc1f737a31e" containerName="registry-server" containerID="cri-o://4bcf6b138146f7b76e8825edfa3c8d6bd8d5e88fb8a826f0e5ea27c254c9b12e" gracePeriod=2 Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.722473 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.949043 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:50 crc kubenswrapper[4869]: I0106 14:16:50.949117 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.001206 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.015775 4869 generic.go:334] "Generic (PLEG): container finished" 
podID="8b636146-6102-4a1b-8dc5-4cc1f737a31e" containerID="4bcf6b138146f7b76e8825edfa3c8d6bd8d5e88fb8a826f0e5ea27c254c9b12e" exitCode=0 Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.015839 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw58w" event={"ID":"8b636146-6102-4a1b-8dc5-4cc1f737a31e","Type":"ContainerDied","Data":"4bcf6b138146f7b76e8825edfa3c8d6bd8d5e88fb8a826f0e5ea27c254c9b12e"} Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.017793 4869 generic.go:334] "Generic (PLEG): container finished" podID="2def269d-7d12-409c-9513-8d3bc8aeba7f" containerID="33aa82edce4b123b4c059c6859c3a78db4b436ce04345ee08f19ac084f593da2" exitCode=0 Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.017854 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-trwtt" event={"ID":"2def269d-7d12-409c-9513-8d3bc8aeba7f","Type":"ContainerDied","Data":"33aa82edce4b123b4c059c6859c3a78db4b436ce04345ee08f19ac084f593da2"} Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.035782 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"be48d5b3-d81d-4bb6-a7a6-7706d8208db8","Type":"ContainerStarted","Data":"de0ddb418c4eb216af8838d7c368b0cdcb71ff8fdb34b65783214a638e74d2df"} Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.038585 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a54155a0-94ff-4519-81e3-68a0bb1b62b6","Type":"ContainerStarted","Data":"b4f041e0f9531bfc6e45c8260345558c6a4ae855d64eb9a899f864051195a0a2"} Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.044106 4869 generic.go:334] "Generic (PLEG): container finished" podID="7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94" containerID="ebaa8c1959da9cc3c5146e161cde0de32ac60422d36616024857ecfb58a0ff1a" exitCode=0 Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.044443 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" event={"ID":"7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94","Type":"ContainerDied","Data":"ebaa8c1959da9cc3c5146e161cde0de32ac60422d36616024857ecfb58a0ff1a"} Jan 06 14:16:51 crc kubenswrapper[4869]: E0106 14:16:51.046160 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="6a6edbf6-4b64-4319-b863-6e9e5f08746f" Jan 06 14:16:51 crc kubenswrapper[4869]: E0106 14:16:51.048333 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="7127872e-e183-49cf-a8e2-153197597bea" Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.087497 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=21.461733765 podStartE2EDuration="58.087477155s" podCreationTimestamp="2026-01-06 14:15:53 +0000 UTC" firstStartedPulling="2026-01-06 14:15:56.062030915 +0000 UTC m=+974.601718579" lastFinishedPulling="2026-01-06 14:16:32.687774305 +0000 UTC m=+1011.227461969" observedRunningTime="2026-01-06 14:16:51.067896522 +0000 UTC m=+1029.607584196" 
watchObservedRunningTime="2026-01-06 14:16:51.087477155 +0000 UTC m=+1029.627164819" Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.109875 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.112624 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.187813 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tw58w" Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.206994 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m4j6q" podUID="738153bb-7223-4874-a1f5-c7f42c256671" containerName="registry-server" probeResult="failure" output=< Jan 06 14:16:51 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Jan 06 14:16:51 crc kubenswrapper[4869]: > Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.288376 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b636146-6102-4a1b-8dc5-4cc1f737a31e-catalog-content\") pod \"8b636146-6102-4a1b-8dc5-4cc1f737a31e\" (UID: \"8b636146-6102-4a1b-8dc5-4cc1f737a31e\") " Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.288480 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhs97\" (UniqueName: \"kubernetes.io/projected/8b636146-6102-4a1b-8dc5-4cc1f737a31e-kube-api-access-vhs97\") pod \"8b636146-6102-4a1b-8dc5-4cc1f737a31e\" (UID: \"8b636146-6102-4a1b-8dc5-4cc1f737a31e\") " Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.288501 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b636146-6102-4a1b-8dc5-4cc1f737a31e-utilities\") pod \"8b636146-6102-4a1b-8dc5-4cc1f737a31e\" (UID: \"8b636146-6102-4a1b-8dc5-4cc1f737a31e\") " Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.289506 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b636146-6102-4a1b-8dc5-4cc1f737a31e-utilities" (OuterVolumeSpecName: "utilities") pod "8b636146-6102-4a1b-8dc5-4cc1f737a31e" (UID: "8b636146-6102-4a1b-8dc5-4cc1f737a31e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.292619 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b636146-6102-4a1b-8dc5-4cc1f737a31e-kube-api-access-vhs97" (OuterVolumeSpecName: "kube-api-access-vhs97") pod "8b636146-6102-4a1b-8dc5-4cc1f737a31e" (UID: "8b636146-6102-4a1b-8dc5-4cc1f737a31e"). InnerVolumeSpecName "kube-api-access-vhs97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.307531 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b636146-6102-4a1b-8dc5-4cc1f737a31e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b636146-6102-4a1b-8dc5-4cc1f737a31e" (UID: "8b636146-6102-4a1b-8dc5-4cc1f737a31e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.390458 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b636146-6102-4a1b-8dc5-4cc1f737a31e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.390498 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhs97\" (UniqueName: \"kubernetes.io/projected/8b636146-6102-4a1b-8dc5-4cc1f737a31e-kube-api-access-vhs97\") on node \"crc\" DevicePath \"\"" Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.390514 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b636146-6102-4a1b-8dc5-4cc1f737a31e-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.716001 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="860379d2-780a-49cd-8748-020b09e2fe94" path="/var/lib/kubelet/pods/860379d2-780a-49cd-8748-020b09e2fe94/volumes" Jan 06 14:16:51 crc kubenswrapper[4869]: I0106 14:16:51.717854 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efb637a6-5723-4a78-9f13-8f08edbc01bb" path="/var/lib/kubelet/pods/efb637a6-5723-4a78-9f13-8f08edbc01bb/volumes" Jan 06 14:16:52 crc kubenswrapper[4869]: I0106 14:16:52.053871 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" event={"ID":"7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94","Type":"ContainerStarted","Data":"3005a014a00bf7cf97ef7a85ca51bf86ed12fdbb9d3f6ace639ab6297d41c8a1"} Jan 06 14:16:52 crc kubenswrapper[4869]: I0106 14:16:52.055095 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" Jan 06 14:16:52 crc kubenswrapper[4869]: I0106 14:16:52.056734 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tw58w" Jan 06 14:16:52 crc kubenswrapper[4869]: I0106 14:16:52.056650 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tw58w" event={"ID":"8b636146-6102-4a1b-8dc5-4cc1f737a31e","Type":"ContainerDied","Data":"9d5def06b6d242ff158c613ee62c1199bbc7f6e8d2fa66171e4f5b93af6eeebf"} Jan 06 14:16:52 crc kubenswrapper[4869]: I0106 14:16:52.057061 4869 scope.go:117] "RemoveContainer" containerID="4bcf6b138146f7b76e8825edfa3c8d6bd8d5e88fb8a826f0e5ea27c254c9b12e" Jan 06 14:16:52 crc kubenswrapper[4869]: I0106 14:16:52.061552 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-trwtt" event={"ID":"2def269d-7d12-409c-9513-8d3bc8aeba7f","Type":"ContainerStarted","Data":"39da709c11619d000a86755753bb4a6f5776e4a946bc872ed08deb12d945668f"} Jan 06 14:16:52 crc kubenswrapper[4869]: I0106 14:16:52.062290 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-trwtt" Jan 06 14:16:52 crc kubenswrapper[4869]: I0106 14:16:52.068268 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ae2b9cdc-8940-4aeb-bea8-fac416d93eed","Type":"ContainerStarted","Data":"426b388f0985e54a80a1a58cff04cdc2d24f72f605fe5010a05fc3ccdc5cb647"} Jan 06 14:16:52 crc kubenswrapper[4869]: E0106 14:16:52.078048 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="6a6edbf6-4b64-4319-b863-6e9e5f08746f" Jan 06 14:16:52 crc kubenswrapper[4869]: I0106 14:16:52.078068 4869 scope.go:117] "RemoveContainer" containerID="d0cb2b1a90b0edc96ba7c0cc42e3163b119b0ceb2775e3bbbc58135d7b901007" Jan 06 14:16:52 crc kubenswrapper[4869]: E0106 14:16:52.078270 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="7127872e-e183-49cf-a8e2-153197597bea" Jan 06 14:16:52 crc kubenswrapper[4869]: I0106 14:16:52.098560 4869 scope.go:117] "RemoveContainer" containerID="0131707a1906da7548d0bfe73d0df6fcdfbed65a339ebb323ac95876ba3b260a" Jan 06 14:16:52 crc kubenswrapper[4869]: I0106 14:16:52.102776 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-trwtt" podStartSLOduration=3.848284426 podStartE2EDuration="1m0.102645555s" podCreationTimestamp="2026-01-06 14:15:52 +0000 UTC" firstStartedPulling="2026-01-06 14:15:53.551638778 +0000 UTC m=+972.091326442" lastFinishedPulling="2026-01-06 14:16:49.805999907 +0000 UTC m=+1028.345687571" observedRunningTime="2026-01-06 14:16:52.099470895 +0000 UTC m=+1030.639158559" watchObservedRunningTime="2026-01-06 14:16:52.102645555 +0000 UTC m=+1030.642333209" Jan 06 14:16:52 crc kubenswrapper[4869]: I0106 14:16:52.103844 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" podStartSLOduration=4.095528414 podStartE2EDuration="1m0.103836315s" podCreationTimestamp="2026-01-06 14:15:52 +0000 UTC" firstStartedPulling="2026-01-06 14:15:53.705983926 +0000 UTC m=+972.245671590" 
lastFinishedPulling="2026-01-06 14:16:49.714291827 +0000 UTC m=+1028.253979491" observedRunningTime="2026-01-06 14:16:52.080324754 +0000 UTC m=+1030.620012438" watchObservedRunningTime="2026-01-06 14:16:52.103836315 +0000 UTC m=+1030.643523979" Jan 06 14:16:52 crc kubenswrapper[4869]: I0106 14:16:52.142006 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tw58w"] Jan 06 14:16:52 crc kubenswrapper[4869]: I0106 14:16:52.148296 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tw58w"] Jan 06 14:16:53 crc kubenswrapper[4869]: I0106 14:16:53.076581 4869 generic.go:334] "Generic (PLEG): container finished" podID="b5ecad54-1487-4d25-9bd1-e6e486ba59d5" containerID="4c6d6694d0eacc7d4615cdd5908321a8f33e64a3ca254e0ec868279b17f076e2" exitCode=0 Jan 06 14:16:53 crc kubenswrapper[4869]: I0106 14:16:53.076677 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b5ecad54-1487-4d25-9bd1-e6e486ba59d5","Type":"ContainerDied","Data":"4c6d6694d0eacc7d4615cdd5908321a8f33e64a3ca254e0ec868279b17f076e2"} Jan 06 14:16:53 crc kubenswrapper[4869]: E0106 14:16:53.079653 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="7127872e-e183-49cf-a8e2-153197597bea" Jan 06 14:16:53 crc kubenswrapper[4869]: E0106 14:16:53.080379 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="6a6edbf6-4b64-4319-b863-6e9e5f08746f" Jan 06 14:16:53 crc kubenswrapper[4869]: I0106 14:16:53.715755 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b636146-6102-4a1b-8dc5-4cc1f737a31e" path="/var/lib/kubelet/pods/8b636146-6102-4a1b-8dc5-4cc1f737a31e/volumes" Jan 06 14:16:54 crc kubenswrapper[4869]: I0106 14:16:54.086530 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b5ecad54-1487-4d25-9bd1-e6e486ba59d5","Type":"ContainerStarted","Data":"1e2ede0d5cccb432bd2ffd48ff52149ef3267b36a2fa603aa436c541c50bc44e"} Jan 06 14:16:54 crc kubenswrapper[4869]: I0106 14:16:54.110290 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371977.744505 podStartE2EDuration="59.110271664s" podCreationTimestamp="2026-01-06 14:15:55 +0000 UTC" firstStartedPulling="2026-01-06 14:15:57.57339952 +0000 UTC m=+976.113087184" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:16:54.103315639 +0000 UTC m=+1032.643003303" watchObservedRunningTime="2026-01-06 14:16:54.110271664 +0000 UTC m=+1032.649959318" Jan 06 14:16:55 crc kubenswrapper[4869]: I0106 14:16:55.320398 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 06 14:16:55 crc kubenswrapper[4869]: I0106 14:16:55.320877 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 06 14:16:55 crc kubenswrapper[4869]: I0106 14:16:55.410956 4869 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 06 14:16:56 crc kubenswrapper[4869]: I0106 14:16:56.201824 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 06 14:16:56 crc kubenswrapper[4869]: I0106 14:16:56.948635 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 06 14:16:56 crc kubenswrapper[4869]: I0106 14:16:56.948728 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 06 14:16:57 crc kubenswrapper[4869]: I0106 14:16:57.900927 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-trwtt" Jan 06 14:16:58 crc kubenswrapper[4869]: I0106 14:16:58.062832 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" Jan 06 14:16:58 crc kubenswrapper[4869]: I0106 14:16:58.124528 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-trwtt"] Jan 06 14:16:58 crc kubenswrapper[4869]: I0106 14:16:58.124951 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-trwtt" podUID="2def269d-7d12-409c-9513-8d3bc8aeba7f" containerName="dnsmasq-dns" containerID="cri-o://39da709c11619d000a86755753bb4a6f5776e4a946bc872ed08deb12d945668f" gracePeriod=10 Jan 06 14:16:58 crc kubenswrapper[4869]: I0106 14:16:58.603803 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-trwtt" Jan 06 14:16:58 crc kubenswrapper[4869]: I0106 14:16:58.717202 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2def269d-7d12-409c-9513-8d3bc8aeba7f-dns-svc\") pod \"2def269d-7d12-409c-9513-8d3bc8aeba7f\" (UID: \"2def269d-7d12-409c-9513-8d3bc8aeba7f\") " Jan 06 14:16:58 crc kubenswrapper[4869]: I0106 14:16:58.717845 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvrjg\" (UniqueName: \"kubernetes.io/projected/2def269d-7d12-409c-9513-8d3bc8aeba7f-kube-api-access-tvrjg\") pod \"2def269d-7d12-409c-9513-8d3bc8aeba7f\" (UID: \"2def269d-7d12-409c-9513-8d3bc8aeba7f\") " Jan 06 14:16:58 crc kubenswrapper[4869]: I0106 14:16:58.717956 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2def269d-7d12-409c-9513-8d3bc8aeba7f-config\") pod \"2def269d-7d12-409c-9513-8d3bc8aeba7f\" (UID: \"2def269d-7d12-409c-9513-8d3bc8aeba7f\") " Jan 06 14:16:58 crc kubenswrapper[4869]: I0106 14:16:58.733796 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2def269d-7d12-409c-9513-8d3bc8aeba7f-kube-api-access-tvrjg" (OuterVolumeSpecName: "kube-api-access-tvrjg") pod "2def269d-7d12-409c-9513-8d3bc8aeba7f" (UID: "2def269d-7d12-409c-9513-8d3bc8aeba7f"). InnerVolumeSpecName "kube-api-access-tvrjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:16:58 crc kubenswrapper[4869]: I0106 14:16:58.758798 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2def269d-7d12-409c-9513-8d3bc8aeba7f-config" (OuterVolumeSpecName: "config") pod "2def269d-7d12-409c-9513-8d3bc8aeba7f" (UID: "2def269d-7d12-409c-9513-8d3bc8aeba7f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:16:58 crc kubenswrapper[4869]: I0106 14:16:58.758909 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2def269d-7d12-409c-9513-8d3bc8aeba7f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2def269d-7d12-409c-9513-8d3bc8aeba7f" (UID: "2def269d-7d12-409c-9513-8d3bc8aeba7f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:16:58 crc kubenswrapper[4869]: I0106 14:16:58.821186 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2def269d-7d12-409c-9513-8d3bc8aeba7f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 06 14:16:58 crc kubenswrapper[4869]: I0106 14:16:58.821237 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvrjg\" (UniqueName: \"kubernetes.io/projected/2def269d-7d12-409c-9513-8d3bc8aeba7f-kube-api-access-tvrjg\") on node \"crc\" DevicePath \"\"" Jan 06 14:16:58 crc kubenswrapper[4869]: I0106 14:16:58.821253 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2def269d-7d12-409c-9513-8d3bc8aeba7f-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:16:59 crc kubenswrapper[4869]: I0106 14:16:59.131497 4869 generic.go:334] "Generic (PLEG): container finished" podID="2def269d-7d12-409c-9513-8d3bc8aeba7f" containerID="39da709c11619d000a86755753bb4a6f5776e4a946bc872ed08deb12d945668f" exitCode=0 Jan 06 14:16:59 crc kubenswrapper[4869]: I0106 14:16:59.131580 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-trwtt" Jan 06 14:16:59 crc kubenswrapper[4869]: I0106 14:16:59.131558 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-trwtt" event={"ID":"2def269d-7d12-409c-9513-8d3bc8aeba7f","Type":"ContainerDied","Data":"39da709c11619d000a86755753bb4a6f5776e4a946bc872ed08deb12d945668f"} Jan 06 14:16:59 crc kubenswrapper[4869]: I0106 14:16:59.132968 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-trwtt" event={"ID":"2def269d-7d12-409c-9513-8d3bc8aeba7f","Type":"ContainerDied","Data":"e3534a249aeb16c2f93c29c6000fdc92edeb7cef994665f2602577a175303a4f"} Jan 06 14:16:59 crc kubenswrapper[4869]: I0106 14:16:59.133005 4869 scope.go:117] "RemoveContainer" containerID="39da709c11619d000a86755753bb4a6f5776e4a946bc872ed08deb12d945668f" Jan 06 14:16:59 crc kubenswrapper[4869]: I0106 14:16:59.168030 4869 scope.go:117] "RemoveContainer" containerID="33aa82edce4b123b4c059c6859c3a78db4b436ce04345ee08f19ac084f593da2" Jan 06 14:16:59 crc kubenswrapper[4869]: I0106 14:16:59.178859 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-trwtt"] Jan 06 14:16:59 crc kubenswrapper[4869]: I0106 14:16:59.192274 4869 scope.go:117] "RemoveContainer" containerID="39da709c11619d000a86755753bb4a6f5776e4a946bc872ed08deb12d945668f" Jan 06 14:16:59 crc kubenswrapper[4869]: E0106 14:16:59.192967 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39da709c11619d000a86755753bb4a6f5776e4a946bc872ed08deb12d945668f\": container with ID starting with 39da709c11619d000a86755753bb4a6f5776e4a946bc872ed08deb12d945668f not found: ID does not exist" containerID="39da709c11619d000a86755753bb4a6f5776e4a946bc872ed08deb12d945668f" Jan 06 14:16:59 crc kubenswrapper[4869]: I0106 
14:16:59.193010 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39da709c11619d000a86755753bb4a6f5776e4a946bc872ed08deb12d945668f"} err="failed to get container status \"39da709c11619d000a86755753bb4a6f5776e4a946bc872ed08deb12d945668f\": rpc error: code = NotFound desc = could not find container \"39da709c11619d000a86755753bb4a6f5776e4a946bc872ed08deb12d945668f\": container with ID starting with 39da709c11619d000a86755753bb4a6f5776e4a946bc872ed08deb12d945668f not found: ID does not exist" Jan 06 14:16:59 crc kubenswrapper[4869]: I0106 14:16:59.193032 4869 scope.go:117] "RemoveContainer" containerID="33aa82edce4b123b4c059c6859c3a78db4b436ce04345ee08f19ac084f593da2" Jan 06 14:16:59 crc kubenswrapper[4869]: E0106 14:16:59.193448 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33aa82edce4b123b4c059c6859c3a78db4b436ce04345ee08f19ac084f593da2\": container with ID starting with 33aa82edce4b123b4c059c6859c3a78db4b436ce04345ee08f19ac084f593da2 not found: ID does not exist" containerID="33aa82edce4b123b4c059c6859c3a78db4b436ce04345ee08f19ac084f593da2" Jan 06 14:16:59 crc kubenswrapper[4869]: I0106 14:16:59.193469 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33aa82edce4b123b4c059c6859c3a78db4b436ce04345ee08f19ac084f593da2"} err="failed to get container status \"33aa82edce4b123b4c059c6859c3a78db4b436ce04345ee08f19ac084f593da2\": rpc error: code = NotFound desc = could not find container \"33aa82edce4b123b4c059c6859c3a78db4b436ce04345ee08f19ac084f593da2\": container with ID starting with 33aa82edce4b123b4c059c6859c3a78db4b436ce04345ee08f19ac084f593da2 not found: ID does not exist" Jan 06 14:16:59 crc kubenswrapper[4869]: I0106 14:16:59.196236 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-trwtt"] Jan 06 14:16:59 crc kubenswrapper[4869]: I0106 14:16:59.726004 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2def269d-7d12-409c-9513-8d3bc8aeba7f" path="/var/lib/kubelet/pods/2def269d-7d12-409c-9513-8d3bc8aeba7f/volumes" Jan 06 14:17:00 crc kubenswrapper[4869]: I0106 14:17:00.221977 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m4j6q" Jan 06 14:17:00 crc kubenswrapper[4869]: I0106 14:17:00.276122 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m4j6q" Jan 06 14:17:00 crc kubenswrapper[4869]: I0106 14:17:00.457013 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m4j6q"] Jan 06 14:17:01 crc kubenswrapper[4869]: I0106 14:17:01.534160 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 06 14:17:01 crc kubenswrapper[4869]: I0106 14:17:01.625343 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 06 14:17:02 crc kubenswrapper[4869]: I0106 14:17:02.156908 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m4j6q" podUID="738153bb-7223-4874-a1f5-c7f42c256671" containerName="registry-server" containerID="cri-o://f31aecade318392540029ff3cf9405612b269d56156cf231963f826ce31ca3c8" gracePeriod=2 Jan 06 14:17:02 crc kubenswrapper[4869]: I0106 14:17:02.633364 4869 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m4j6q" Jan 06 14:17:02 crc kubenswrapper[4869]: I0106 14:17:02.692573 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/738153bb-7223-4874-a1f5-c7f42c256671-catalog-content\") pod \"738153bb-7223-4874-a1f5-c7f42c256671\" (UID: \"738153bb-7223-4874-a1f5-c7f42c256671\") " Jan 06 14:17:02 crc kubenswrapper[4869]: I0106 14:17:02.692714 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/738153bb-7223-4874-a1f5-c7f42c256671-utilities\") pod \"738153bb-7223-4874-a1f5-c7f42c256671\" (UID: \"738153bb-7223-4874-a1f5-c7f42c256671\") " Jan 06 14:17:02 crc kubenswrapper[4869]: I0106 14:17:02.692884 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrprt\" (UniqueName: \"kubernetes.io/projected/738153bb-7223-4874-a1f5-c7f42c256671-kube-api-access-qrprt\") pod \"738153bb-7223-4874-a1f5-c7f42c256671\" (UID: \"738153bb-7223-4874-a1f5-c7f42c256671\") " Jan 06 14:17:02 crc kubenswrapper[4869]: I0106 14:17:02.694633 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/738153bb-7223-4874-a1f5-c7f42c256671-utilities" (OuterVolumeSpecName: "utilities") pod "738153bb-7223-4874-a1f5-c7f42c256671" (UID: "738153bb-7223-4874-a1f5-c7f42c256671"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:17:02 crc kubenswrapper[4869]: I0106 14:17:02.703869 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/738153bb-7223-4874-a1f5-c7f42c256671-kube-api-access-qrprt" (OuterVolumeSpecName: "kube-api-access-qrprt") pod "738153bb-7223-4874-a1f5-c7f42c256671" (UID: "738153bb-7223-4874-a1f5-c7f42c256671"). InnerVolumeSpecName "kube-api-access-qrprt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:17:02 crc kubenswrapper[4869]: I0106 14:17:02.796165 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrprt\" (UniqueName: \"kubernetes.io/projected/738153bb-7223-4874-a1f5-c7f42c256671-kube-api-access-qrprt\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:02 crc kubenswrapper[4869]: I0106 14:17:02.796204 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/738153bb-7223-4874-a1f5-c7f42c256671-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:02 crc kubenswrapper[4869]: I0106 14:17:02.801843 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/738153bb-7223-4874-a1f5-c7f42c256671-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "738153bb-7223-4874-a1f5-c7f42c256671" (UID: "738153bb-7223-4874-a1f5-c7f42c256671"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:17:02 crc kubenswrapper[4869]: I0106 14:17:02.898020 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/738153bb-7223-4874-a1f5-c7f42c256671-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:03 crc kubenswrapper[4869]: I0106 14:17:03.172088 4869 generic.go:334] "Generic (PLEG): container finished" podID="738153bb-7223-4874-a1f5-c7f42c256671" containerID="f31aecade318392540029ff3cf9405612b269d56156cf231963f826ce31ca3c8" exitCode=0 Jan 06 14:17:03 crc kubenswrapper[4869]: I0106 14:17:03.172167 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4j6q" event={"ID":"738153bb-7223-4874-a1f5-c7f42c256671","Type":"ContainerDied","Data":"f31aecade318392540029ff3cf9405612b269d56156cf231963f826ce31ca3c8"} Jan 06 14:17:03 crc kubenswrapper[4869]: I0106 14:17:03.172231 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m4j6q" event={"ID":"738153bb-7223-4874-a1f5-c7f42c256671","Type":"ContainerDied","Data":"26f17b0daa15d8c29e8d7e410b590cb16105337d46fa084b3505f897f041f478"} Jan 06 14:17:03 crc kubenswrapper[4869]: I0106 14:17:03.172275 4869 scope.go:117] "RemoveContainer" containerID="f31aecade318392540029ff3cf9405612b269d56156cf231963f826ce31ca3c8" Jan 06 14:17:03 crc kubenswrapper[4869]: I0106 14:17:03.172290 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m4j6q" Jan 06 14:17:03 crc kubenswrapper[4869]: I0106 14:17:03.202845 4869 scope.go:117] "RemoveContainer" containerID="892a479b1f05cfb197b7e2b629ba35998b2aa0c84f4cb7835ca857530e6d249f" Jan 06 14:17:03 crc kubenswrapper[4869]: I0106 14:17:03.228488 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m4j6q"] Jan 06 14:17:03 crc kubenswrapper[4869]: I0106 14:17:03.249233 4869 scope.go:117] "RemoveContainer" containerID="a692aadc13b47262d218d0877b1d22f8791b0776c14633bbef895560c088eaf8" Jan 06 14:17:03 crc kubenswrapper[4869]: I0106 14:17:03.254060 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m4j6q"] Jan 06 14:17:03 crc kubenswrapper[4869]: I0106 14:17:03.270531 4869 scope.go:117] "RemoveContainer" containerID="f31aecade318392540029ff3cf9405612b269d56156cf231963f826ce31ca3c8" Jan 06 14:17:03 crc kubenswrapper[4869]: E0106 14:17:03.271054 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f31aecade318392540029ff3cf9405612b269d56156cf231963f826ce31ca3c8\": container with ID starting with f31aecade318392540029ff3cf9405612b269d56156cf231963f826ce31ca3c8 not found: ID does not exist" containerID="f31aecade318392540029ff3cf9405612b269d56156cf231963f826ce31ca3c8" Jan 06 14:17:03 crc kubenswrapper[4869]: I0106 14:17:03.271233 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31aecade318392540029ff3cf9405612b269d56156cf231963f826ce31ca3c8"} err="failed to get container status \"f31aecade318392540029ff3cf9405612b269d56156cf231963f826ce31ca3c8\": rpc error: code = NotFound desc = could not find container \"f31aecade318392540029ff3cf9405612b269d56156cf231963f826ce31ca3c8\": container with ID starting with f31aecade318392540029ff3cf9405612b269d56156cf231963f826ce31ca3c8 not found: ID does not exist" Jan 06 14:17:03 crc 
kubenswrapper[4869]: I0106 14:17:03.271331 4869 scope.go:117] "RemoveContainer" containerID="892a479b1f05cfb197b7e2b629ba35998b2aa0c84f4cb7835ca857530e6d249f" Jan 06 14:17:03 crc kubenswrapper[4869]: E0106 14:17:03.271824 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"892a479b1f05cfb197b7e2b629ba35998b2aa0c84f4cb7835ca857530e6d249f\": container with ID starting with 892a479b1f05cfb197b7e2b629ba35998b2aa0c84f4cb7835ca857530e6d249f not found: ID does not exist" containerID="892a479b1f05cfb197b7e2b629ba35998b2aa0c84f4cb7835ca857530e6d249f" Jan 06 14:17:03 crc kubenswrapper[4869]: I0106 14:17:03.271863 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"892a479b1f05cfb197b7e2b629ba35998b2aa0c84f4cb7835ca857530e6d249f"} err="failed to get container status \"892a479b1f05cfb197b7e2b629ba35998b2aa0c84f4cb7835ca857530e6d249f\": rpc error: code = NotFound desc = could not find container \"892a479b1f05cfb197b7e2b629ba35998b2aa0c84f4cb7835ca857530e6d249f\": container with ID starting with 892a479b1f05cfb197b7e2b629ba35998b2aa0c84f4cb7835ca857530e6d249f not found: ID does not exist" Jan 06 14:17:03 crc kubenswrapper[4869]: I0106 14:17:03.271887 4869 scope.go:117] "RemoveContainer" containerID="a692aadc13b47262d218d0877b1d22f8791b0776c14633bbef895560c088eaf8" Jan 06 14:17:03 crc kubenswrapper[4869]: E0106 14:17:03.272261 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a692aadc13b47262d218d0877b1d22f8791b0776c14633bbef895560c088eaf8\": container with ID starting with a692aadc13b47262d218d0877b1d22f8791b0776c14633bbef895560c088eaf8 not found: ID does not exist" containerID="a692aadc13b47262d218d0877b1d22f8791b0776c14633bbef895560c088eaf8" Jan 06 14:17:03 crc kubenswrapper[4869]: I0106 14:17:03.272318 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a692aadc13b47262d218d0877b1d22f8791b0776c14633bbef895560c088eaf8"} err="failed to get container status \"a692aadc13b47262d218d0877b1d22f8791b0776c14633bbef895560c088eaf8\": rpc error: code = NotFound desc = could not find container \"a692aadc13b47262d218d0877b1d22f8791b0776c14633bbef895560c088eaf8\": container with ID starting with a692aadc13b47262d218d0877b1d22f8791b0776c14633bbef895560c088eaf8 not found: ID does not exist" Jan 06 14:17:03 crc kubenswrapper[4869]: I0106 14:17:03.716701 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="738153bb-7223-4874-a1f5-c7f42c256671" path="/var/lib/kubelet/pods/738153bb-7223-4874-a1f5-c7f42c256671/volumes" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.094873 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-7cqhv"] Jan 06 14:17:04 crc kubenswrapper[4869]: E0106 14:17:04.095160 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efb637a6-5723-4a78-9f13-8f08edbc01bb" containerName="extract-content" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.095172 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="efb637a6-5723-4a78-9f13-8f08edbc01bb" containerName="extract-content" Jan 06 14:17:04 crc kubenswrapper[4869]: E0106 14:17:04.095196 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2def269d-7d12-409c-9513-8d3bc8aeba7f" containerName="init" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.095203 4869 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="2def269d-7d12-409c-9513-8d3bc8aeba7f" containerName="init" Jan 06 14:17:04 crc kubenswrapper[4869]: E0106 14:17:04.095220 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860379d2-780a-49cd-8748-020b09e2fe94" containerName="extract-utilities" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.095227 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="860379d2-780a-49cd-8748-020b09e2fe94" containerName="extract-utilities" Jan 06 14:17:04 crc kubenswrapper[4869]: E0106 14:17:04.095237 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b636146-6102-4a1b-8dc5-4cc1f737a31e" containerName="registry-server" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.095244 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b636146-6102-4a1b-8dc5-4cc1f737a31e" containerName="registry-server" Jan 06 14:17:04 crc kubenswrapper[4869]: E0106 14:17:04.095251 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b636146-6102-4a1b-8dc5-4cc1f737a31e" containerName="extract-utilities" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.095256 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b636146-6102-4a1b-8dc5-4cc1f737a31e" containerName="extract-utilities" Jan 06 14:17:04 crc kubenswrapper[4869]: E0106 14:17:04.095273 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="738153bb-7223-4874-a1f5-c7f42c256671" containerName="extract-utilities" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.095280 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="738153bb-7223-4874-a1f5-c7f42c256671" containerName="extract-utilities" Jan 06 14:17:04 crc kubenswrapper[4869]: E0106 14:17:04.095292 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efb637a6-5723-4a78-9f13-8f08edbc01bb" containerName="registry-server" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.095297 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="efb637a6-5723-4a78-9f13-8f08edbc01bb" containerName="registry-server" Jan 06 14:17:04 crc kubenswrapper[4869]: E0106 14:17:04.095309 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="738153bb-7223-4874-a1f5-c7f42c256671" containerName="registry-server" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.095315 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="738153bb-7223-4874-a1f5-c7f42c256671" containerName="registry-server" Jan 06 14:17:04 crc kubenswrapper[4869]: E0106 14:17:04.095323 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="738153bb-7223-4874-a1f5-c7f42c256671" containerName="extract-content" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.095329 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="738153bb-7223-4874-a1f5-c7f42c256671" containerName="extract-content" Jan 06 14:17:04 crc kubenswrapper[4869]: E0106 14:17:04.095339 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efb637a6-5723-4a78-9f13-8f08edbc01bb" containerName="extract-utilities" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.095344 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="efb637a6-5723-4a78-9f13-8f08edbc01bb" containerName="extract-utilities" Jan 06 14:17:04 crc kubenswrapper[4869]: E0106 14:17:04.095357 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860379d2-780a-49cd-8748-020b09e2fe94" containerName="extract-content" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.095364 4869 
state_mem.go:107] "Deleted CPUSet assignment" podUID="860379d2-780a-49cd-8748-020b09e2fe94" containerName="extract-content" Jan 06 14:17:04 crc kubenswrapper[4869]: E0106 14:17:04.095375 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860379d2-780a-49cd-8748-020b09e2fe94" containerName="registry-server" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.095381 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="860379d2-780a-49cd-8748-020b09e2fe94" containerName="registry-server" Jan 06 14:17:04 crc kubenswrapper[4869]: E0106 14:17:04.095389 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2def269d-7d12-409c-9513-8d3bc8aeba7f" containerName="dnsmasq-dns" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.095395 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2def269d-7d12-409c-9513-8d3bc8aeba7f" containerName="dnsmasq-dns" Jan 06 14:17:04 crc kubenswrapper[4869]: E0106 14:17:04.095401 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b636146-6102-4a1b-8dc5-4cc1f737a31e" containerName="extract-content" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.095407 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b636146-6102-4a1b-8dc5-4cc1f737a31e" containerName="extract-content" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.095531 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2def269d-7d12-409c-9513-8d3bc8aeba7f" containerName="dnsmasq-dns" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.095548 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b636146-6102-4a1b-8dc5-4cc1f737a31e" containerName="registry-server" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.095559 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="860379d2-780a-49cd-8748-020b09e2fe94" containerName="registry-server" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.095571 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="efb637a6-5723-4a78-9f13-8f08edbc01bb" containerName="registry-server" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.095579 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="738153bb-7223-4874-a1f5-c7f42c256671" containerName="registry-server" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.096147 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7cqhv" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.099555 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.113047 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7cqhv"] Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.220936 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shxvr\" (UniqueName: \"kubernetes.io/projected/17f6c22c-6057-432d-b208-7d80e8accda8-kube-api-access-shxvr\") pod \"root-account-create-update-7cqhv\" (UID: \"17f6c22c-6057-432d-b208-7d80e8accda8\") " pod="openstack/root-account-create-update-7cqhv" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.220979 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17f6c22c-6057-432d-b208-7d80e8accda8-operator-scripts\") pod \"root-account-create-update-7cqhv\" (UID: \"17f6c22c-6057-432d-b208-7d80e8accda8\") " pod="openstack/root-account-create-update-7cqhv" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.322679 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shxvr\" (UniqueName: \"kubernetes.io/projected/17f6c22c-6057-432d-b208-7d80e8accda8-kube-api-access-shxvr\") pod \"root-account-create-update-7cqhv\" (UID: \"17f6c22c-6057-432d-b208-7d80e8accda8\") " pod="openstack/root-account-create-update-7cqhv" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.322733 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17f6c22c-6057-432d-b208-7d80e8accda8-operator-scripts\") pod \"root-account-create-update-7cqhv\" (UID: \"17f6c22c-6057-432d-b208-7d80e8accda8\") " pod="openstack/root-account-create-update-7cqhv" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.323678 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17f6c22c-6057-432d-b208-7d80e8accda8-operator-scripts\") pod \"root-account-create-update-7cqhv\" (UID: \"17f6c22c-6057-432d-b208-7d80e8accda8\") " pod="openstack/root-account-create-update-7cqhv" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.341702 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shxvr\" (UniqueName: \"kubernetes.io/projected/17f6c22c-6057-432d-b208-7d80e8accda8-kube-api-access-shxvr\") pod \"root-account-create-update-7cqhv\" (UID: \"17f6c22c-6057-432d-b208-7d80e8accda8\") " pod="openstack/root-account-create-update-7cqhv" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.418374 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7cqhv" Jan 06 14:17:04 crc kubenswrapper[4869]: I0106 14:17:04.851303 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7cqhv"] Jan 06 14:17:04 crc kubenswrapper[4869]: W0106 14:17:04.852110 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17f6c22c_6057_432d_b208_7d80e8accda8.slice/crio-974afdb1509aa2e55ed048fe652ad022272894e46e99b29dbda20433d030f88e WatchSource:0}: Error finding container 974afdb1509aa2e55ed048fe652ad022272894e46e99b29dbda20433d030f88e: Status 404 returned error can't find the container with id 974afdb1509aa2e55ed048fe652ad022272894e46e99b29dbda20433d030f88e Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.191041 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6a6edbf6-4b64-4319-b863-6e9e5f08746f","Type":"ContainerStarted","Data":"790899a7e231cf509192c7289cf2b19b5843fc29b79a2c39828b4b4767502f9f"} Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.192482 4869 generic.go:334] "Generic (PLEG): container finished" podID="17f6c22c-6057-432d-b208-7d80e8accda8" containerID="379510929f2b336c0c77e9b2c5dd736a6c522b38e1119b717ce36452c40dd080" exitCode=0 Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.192564 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7cqhv" event={"ID":"17f6c22c-6057-432d-b208-7d80e8accda8","Type":"ContainerDied","Data":"379510929f2b336c0c77e9b2c5dd736a6c522b38e1119b717ce36452c40dd080"} Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.192611 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7cqhv" event={"ID":"17f6c22c-6057-432d-b208-7d80e8accda8","Type":"ContainerStarted","Data":"974afdb1509aa2e55ed048fe652ad022272894e46e99b29dbda20433d030f88e"} Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.194146 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7127872e-e183-49cf-a8e2-153197597bea","Type":"ContainerStarted","Data":"e32ac8c506b0fa878566c732e1ab9037900b903ae49bca716c520fdae66fc68c"} Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.227023 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=25.897789219 podStartE2EDuration="1m1.226994166s" podCreationTimestamp="2026-01-06 14:16:04 +0000 UTC" firstStartedPulling="2026-01-06 14:16:28.802193714 +0000 UTC m=+1007.341881378" lastFinishedPulling="2026-01-06 14:17:04.131398661 +0000 UTC m=+1042.671086325" observedRunningTime="2026-01-06 14:17:05.218474812 +0000 UTC m=+1043.758162496" watchObservedRunningTime="2026-01-06 14:17:05.226994166 +0000 UTC m=+1043.766681830" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.254335 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=57.472913938 podStartE2EDuration="1m1.254310085s" podCreationTimestamp="2026-01-06 14:16:04 +0000 UTC" firstStartedPulling="2026-01-06 14:16:28.717330636 +0000 UTC m=+1007.257018300" lastFinishedPulling="2026-01-06 14:16:32.498726783 +0000 UTC m=+1011.038414447" observedRunningTime="2026-01-06 14:17:05.25214331 +0000 UTC m=+1043.791830984" watchObservedRunningTime="2026-01-06 14:17:05.254310085 +0000 UTC m=+1043.793997749" Jan 06 14:17:05 crc kubenswrapper[4869]: 
I0106 14:17:05.402472 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-w2lmg"] Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.405306 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-w2lmg" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.409918 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.426997 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-w2lmg"] Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.442964 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a41a9f-c626-4ced-b16b-334977932f11-config\") pod \"dnsmasq-dns-7f896c8c65-w2lmg\" (UID: \"04a41a9f-c626-4ced-b16b-334977932f11\") " pod="openstack/dnsmasq-dns-7f896c8c65-w2lmg" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.443021 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04a41a9f-c626-4ced-b16b-334977932f11-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-w2lmg\" (UID: \"04a41a9f-c626-4ced-b16b-334977932f11\") " pod="openstack/dnsmasq-dns-7f896c8c65-w2lmg" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.443075 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t77ql\" (UniqueName: \"kubernetes.io/projected/04a41a9f-c626-4ced-b16b-334977932f11-kube-api-access-t77ql\") pod \"dnsmasq-dns-7f896c8c65-w2lmg\" (UID: \"04a41a9f-c626-4ced-b16b-334977932f11\") " pod="openstack/dnsmasq-dns-7f896c8c65-w2lmg" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.443097 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04a41a9f-c626-4ced-b16b-334977932f11-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-w2lmg\" (UID: \"04a41a9f-c626-4ced-b16b-334977932f11\") " pod="openstack/dnsmasq-dns-7f896c8c65-w2lmg" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.457732 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-l72g5"] Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.459169 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-l72g5" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.465804 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.466012 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-l72g5"] Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.545263 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/580a8eb0-0af1-4e26-922f-2714f581c604-ovs-rundir\") pod \"ovn-controller-metrics-l72g5\" (UID: \"580a8eb0-0af1-4e26-922f-2714f581c604\") " pod="openstack/ovn-controller-metrics-l72g5" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.545333 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04a41a9f-c626-4ced-b16b-334977932f11-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-w2lmg\" (UID: \"04a41a9f-c626-4ced-b16b-334977932f11\") " pod="openstack/dnsmasq-dns-7f896c8c65-w2lmg" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.545469 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/580a8eb0-0af1-4e26-922f-2714f581c604-ovn-rundir\") pod \"ovn-controller-metrics-l72g5\" (UID: \"580a8eb0-0af1-4e26-922f-2714f581c604\") " pod="openstack/ovn-controller-metrics-l72g5" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.545524 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nxl4\" (UniqueName: \"kubernetes.io/projected/580a8eb0-0af1-4e26-922f-2714f581c604-kube-api-access-4nxl4\") pod \"ovn-controller-metrics-l72g5\" (UID: \"580a8eb0-0af1-4e26-922f-2714f581c604\") " pod="openstack/ovn-controller-metrics-l72g5" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.545727 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t77ql\" (UniqueName: \"kubernetes.io/projected/04a41a9f-c626-4ced-b16b-334977932f11-kube-api-access-t77ql\") pod \"dnsmasq-dns-7f896c8c65-w2lmg\" (UID: \"04a41a9f-c626-4ced-b16b-334977932f11\") " pod="openstack/dnsmasq-dns-7f896c8c65-w2lmg" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.545802 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04a41a9f-c626-4ced-b16b-334977932f11-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-w2lmg\" (UID: \"04a41a9f-c626-4ced-b16b-334977932f11\") " pod="openstack/dnsmasq-dns-7f896c8c65-w2lmg" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.545841 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/580a8eb0-0af1-4e26-922f-2714f581c604-config\") pod \"ovn-controller-metrics-l72g5\" (UID: \"580a8eb0-0af1-4e26-922f-2714f581c604\") " pod="openstack/ovn-controller-metrics-l72g5" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.545984 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/580a8eb0-0af1-4e26-922f-2714f581c604-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-l72g5\" (UID: 
\"580a8eb0-0af1-4e26-922f-2714f581c604\") " pod="openstack/ovn-controller-metrics-l72g5" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.546218 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/580a8eb0-0af1-4e26-922f-2714f581c604-combined-ca-bundle\") pod \"ovn-controller-metrics-l72g5\" (UID: \"580a8eb0-0af1-4e26-922f-2714f581c604\") " pod="openstack/ovn-controller-metrics-l72g5" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.546389 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a41a9f-c626-4ced-b16b-334977932f11-config\") pod \"dnsmasq-dns-7f896c8c65-w2lmg\" (UID: \"04a41a9f-c626-4ced-b16b-334977932f11\") " pod="openstack/dnsmasq-dns-7f896c8c65-w2lmg" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.546528 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04a41a9f-c626-4ced-b16b-334977932f11-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-w2lmg\" (UID: \"04a41a9f-c626-4ced-b16b-334977932f11\") " pod="openstack/dnsmasq-dns-7f896c8c65-w2lmg" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.546916 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04a41a9f-c626-4ced-b16b-334977932f11-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-w2lmg\" (UID: \"04a41a9f-c626-4ced-b16b-334977932f11\") " pod="openstack/dnsmasq-dns-7f896c8c65-w2lmg" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.547143 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a41a9f-c626-4ced-b16b-334977932f11-config\") pod \"dnsmasq-dns-7f896c8c65-w2lmg\" (UID: \"04a41a9f-c626-4ced-b16b-334977932f11\") " pod="openstack/dnsmasq-dns-7f896c8c65-w2lmg" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.551679 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-w2lmg"] Jan 06 14:17:05 crc kubenswrapper[4869]: E0106 14:17:05.552280 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-t77ql], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7f896c8c65-w2lmg" podUID="04a41a9f-c626-4ced-b16b-334977932f11" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.568623 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.570177 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.581278 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.582916 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t77ql\" (UniqueName: \"kubernetes.io/projected/04a41a9f-c626-4ced-b16b-334977932f11-kube-api-access-t77ql\") pod \"dnsmasq-dns-7f896c8c65-w2lmg\" (UID: \"04a41a9f-c626-4ced-b16b-334977932f11\") " pod="openstack/dnsmasq-dns-7f896c8c65-w2lmg" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.583936 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.584169 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.588257 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-s7j8b" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.598785 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.648533 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/580a8eb0-0af1-4e26-922f-2714f581c604-ovs-rundir\") pod \"ovn-controller-metrics-l72g5\" (UID: \"580a8eb0-0af1-4e26-922f-2714f581c604\") " pod="openstack/ovn-controller-metrics-l72g5" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.648934 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/580a8eb0-0af1-4e26-922f-2714f581c604-ovn-rundir\") pod \"ovn-controller-metrics-l72g5\" (UID: \"580a8eb0-0af1-4e26-922f-2714f581c604\") " pod="openstack/ovn-controller-metrics-l72g5" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.649033 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nxl4\" (UniqueName: \"kubernetes.io/projected/580a8eb0-0af1-4e26-922f-2714f581c604-kube-api-access-4nxl4\") pod \"ovn-controller-metrics-l72g5\" (UID: \"580a8eb0-0af1-4e26-922f-2714f581c604\") " pod="openstack/ovn-controller-metrics-l72g5" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.649178 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/580a8eb0-0af1-4e26-922f-2714f581c604-config\") pod \"ovn-controller-metrics-l72g5\" (UID: \"580a8eb0-0af1-4e26-922f-2714f581c604\") " pod="openstack/ovn-controller-metrics-l72g5" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.649299 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/580a8eb0-0af1-4e26-922f-2714f581c604-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-l72g5\" (UID: \"580a8eb0-0af1-4e26-922f-2714f581c604\") " pod="openstack/ovn-controller-metrics-l72g5" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.649415 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/580a8eb0-0af1-4e26-922f-2714f581c604-combined-ca-bundle\") pod \"ovn-controller-metrics-l72g5\" (UID: 
\"580a8eb0-0af1-4e26-922f-2714f581c604\") " pod="openstack/ovn-controller-metrics-l72g5" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.649939 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/580a8eb0-0af1-4e26-922f-2714f581c604-ovn-rundir\") pod \"ovn-controller-metrics-l72g5\" (UID: \"580a8eb0-0af1-4e26-922f-2714f581c604\") " pod="openstack/ovn-controller-metrics-l72g5" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.652063 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-92ks9"] Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.654369 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.654610 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/580a8eb0-0af1-4e26-922f-2714f581c604-config\") pod \"ovn-controller-metrics-l72g5\" (UID: \"580a8eb0-0af1-4e26-922f-2714f581c604\") " pod="openstack/ovn-controller-metrics-l72g5" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.655081 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/580a8eb0-0af1-4e26-922f-2714f581c604-ovs-rundir\") pod \"ovn-controller-metrics-l72g5\" (UID: \"580a8eb0-0af1-4e26-922f-2714f581c604\") " pod="openstack/ovn-controller-metrics-l72g5" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.667428 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/580a8eb0-0af1-4e26-922f-2714f581c604-combined-ca-bundle\") pod \"ovn-controller-metrics-l72g5\" (UID: \"580a8eb0-0af1-4e26-922f-2714f581c604\") " pod="openstack/ovn-controller-metrics-l72g5" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.669257 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/580a8eb0-0af1-4e26-922f-2714f581c604-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-l72g5\" (UID: \"580a8eb0-0af1-4e26-922f-2714f581c604\") " pod="openstack/ovn-controller-metrics-l72g5" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.670263 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.704887 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nxl4\" (UniqueName: \"kubernetes.io/projected/580a8eb0-0af1-4e26-922f-2714f581c604-kube-api-access-4nxl4\") pod \"ovn-controller-metrics-l72g5\" (UID: \"580a8eb0-0af1-4e26-922f-2714f581c604\") " pod="openstack/ovn-controller-metrics-l72g5" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.750496 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhqpq\" (UniqueName: \"kubernetes.io/projected/025cc3c7-4b77-4fcf-b05e-4abde5125639-kube-api-access-bhqpq\") pod \"ovn-northd-0\" (UID: \"025cc3c7-4b77-4fcf-b05e-4abde5125639\") " pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.750558 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/025cc3c7-4b77-4fcf-b05e-4abde5125639-scripts\") pod 
\"ovn-northd-0\" (UID: \"025cc3c7-4b77-4fcf-b05e-4abde5125639\") " pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.750583 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/025cc3c7-4b77-4fcf-b05e-4abde5125639-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"025cc3c7-4b77-4fcf-b05e-4abde5125639\") " pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.750606 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/025cc3c7-4b77-4fcf-b05e-4abde5125639-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"025cc3c7-4b77-4fcf-b05e-4abde5125639\") " pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.750632 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/025cc3c7-4b77-4fcf-b05e-4abde5125639-config\") pod \"ovn-northd-0\" (UID: \"025cc3c7-4b77-4fcf-b05e-4abde5125639\") " pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.750653 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025cc3c7-4b77-4fcf-b05e-4abde5125639-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"025cc3c7-4b77-4fcf-b05e-4abde5125639\") " pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.750801 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/025cc3c7-4b77-4fcf-b05e-4abde5125639-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"025cc3c7-4b77-4fcf-b05e-4abde5125639\") " pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.759595 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-92ks9"] Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.788870 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-l72g5" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.851793 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/025cc3c7-4b77-4fcf-b05e-4abde5125639-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"025cc3c7-4b77-4fcf-b05e-4abde5125639\") " pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.851841 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljcxx\" (UniqueName: \"kubernetes.io/projected/1e7566f7-7393-481e-bac9-db0f5c880b46-kube-api-access-ljcxx\") pod \"dnsmasq-dns-86db49b7ff-92ks9\" (UID: \"1e7566f7-7393-481e-bac9-db0f5c880b46\") " pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.851873 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/025cc3c7-4b77-4fcf-b05e-4abde5125639-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"025cc3c7-4b77-4fcf-b05e-4abde5125639\") " pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.851904 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/025cc3c7-4b77-4fcf-b05e-4abde5125639-config\") pod \"ovn-northd-0\" (UID: \"025cc3c7-4b77-4fcf-b05e-4abde5125639\") " pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.851929 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-config\") pod \"dnsmasq-dns-86db49b7ff-92ks9\" (UID: \"1e7566f7-7393-481e-bac9-db0f5c880b46\") " pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.851953 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025cc3c7-4b77-4fcf-b05e-4abde5125639-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"025cc3c7-4b77-4fcf-b05e-4abde5125639\") " pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.852003 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-92ks9\" (UID: \"1e7566f7-7393-481e-bac9-db0f5c880b46\") " pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.852024 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/025cc3c7-4b77-4fcf-b05e-4abde5125639-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"025cc3c7-4b77-4fcf-b05e-4abde5125639\") " pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.852048 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-92ks9\" (UID: \"1e7566f7-7393-481e-bac9-db0f5c880b46\") " pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.852073 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-92ks9\" (UID: \"1e7566f7-7393-481e-bac9-db0f5c880b46\") " pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.852095 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhqpq\" (UniqueName: \"kubernetes.io/projected/025cc3c7-4b77-4fcf-b05e-4abde5125639-kube-api-access-bhqpq\") pod \"ovn-northd-0\" (UID: \"025cc3c7-4b77-4fcf-b05e-4abde5125639\") " pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.852123 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/025cc3c7-4b77-4fcf-b05e-4abde5125639-scripts\") pod \"ovn-northd-0\" (UID: \"025cc3c7-4b77-4fcf-b05e-4abde5125639\") " pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.853437 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/025cc3c7-4b77-4fcf-b05e-4abde5125639-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"025cc3c7-4b77-4fcf-b05e-4abde5125639\") " pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.853755 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/025cc3c7-4b77-4fcf-b05e-4abde5125639-scripts\") pod \"ovn-northd-0\" (UID: \"025cc3c7-4b77-4fcf-b05e-4abde5125639\") " pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.854521 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/025cc3c7-4b77-4fcf-b05e-4abde5125639-config\") pod \"ovn-northd-0\" (UID: \"025cc3c7-4b77-4fcf-b05e-4abde5125639\") " pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.857027 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/025cc3c7-4b77-4fcf-b05e-4abde5125639-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"025cc3c7-4b77-4fcf-b05e-4abde5125639\") " pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.857036 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/025cc3c7-4b77-4fcf-b05e-4abde5125639-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"025cc3c7-4b77-4fcf-b05e-4abde5125639\") " pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.857691 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025cc3c7-4b77-4fcf-b05e-4abde5125639-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"025cc3c7-4b77-4fcf-b05e-4abde5125639\") " pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.878063 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhqpq\" (UniqueName: \"kubernetes.io/projected/025cc3c7-4b77-4fcf-b05e-4abde5125639-kube-api-access-bhqpq\") pod \"ovn-northd-0\" (UID: \"025cc3c7-4b77-4fcf-b05e-4abde5125639\") " pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 
14:17:05.942318 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.953135 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-92ks9\" (UID: \"1e7566f7-7393-481e-bac9-db0f5c880b46\") " pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.953585 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljcxx\" (UniqueName: \"kubernetes.io/projected/1e7566f7-7393-481e-bac9-db0f5c880b46-kube-api-access-ljcxx\") pod \"dnsmasq-dns-86db49b7ff-92ks9\" (UID: \"1e7566f7-7393-481e-bac9-db0f5c880b46\") " pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.953637 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-config\") pod \"dnsmasq-dns-86db49b7ff-92ks9\" (UID: \"1e7566f7-7393-481e-bac9-db0f5c880b46\") " pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.953727 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-92ks9\" (UID: \"1e7566f7-7393-481e-bac9-db0f5c880b46\") " pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.953770 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-92ks9\" (UID: \"1e7566f7-7393-481e-bac9-db0f5c880b46\") " pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.954888 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-92ks9\" (UID: \"1e7566f7-7393-481e-bac9-db0f5c880b46\") " pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.954945 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-config\") pod \"dnsmasq-dns-86db49b7ff-92ks9\" (UID: \"1e7566f7-7393-481e-bac9-db0f5c880b46\") " pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.955001 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-92ks9\" (UID: \"1e7566f7-7393-481e-bac9-db0f5c880b46\") " pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" Jan 06 14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.955697 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-92ks9\" (UID: \"1e7566f7-7393-481e-bac9-db0f5c880b46\") " pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" Jan 06 
14:17:05 crc kubenswrapper[4869]: I0106 14:17:05.972072 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljcxx\" (UniqueName: \"kubernetes.io/projected/1e7566f7-7393-481e-bac9-db0f5c880b46-kube-api-access-ljcxx\") pod \"dnsmasq-dns-86db49b7ff-92ks9\" (UID: \"1e7566f7-7393-481e-bac9-db0f5c880b46\") " pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.125733 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.204228 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-w2lmg" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.223037 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-w2lmg" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.308169 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-l72g5"] Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.362081 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t77ql\" (UniqueName: \"kubernetes.io/projected/04a41a9f-c626-4ced-b16b-334977932f11-kube-api-access-t77ql\") pod \"04a41a9f-c626-4ced-b16b-334977932f11\" (UID: \"04a41a9f-c626-4ced-b16b-334977932f11\") " Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.362562 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04a41a9f-c626-4ced-b16b-334977932f11-dns-svc\") pod \"04a41a9f-c626-4ced-b16b-334977932f11\" (UID: \"04a41a9f-c626-4ced-b16b-334977932f11\") " Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.362617 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04a41a9f-c626-4ced-b16b-334977932f11-ovsdbserver-sb\") pod \"04a41a9f-c626-4ced-b16b-334977932f11\" (UID: \"04a41a9f-c626-4ced-b16b-334977932f11\") " Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.362660 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a41a9f-c626-4ced-b16b-334977932f11-config\") pod \"04a41a9f-c626-4ced-b16b-334977932f11\" (UID: \"04a41a9f-c626-4ced-b16b-334977932f11\") " Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.363404 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04a41a9f-c626-4ced-b16b-334977932f11-config" (OuterVolumeSpecName: "config") pod "04a41a9f-c626-4ced-b16b-334977932f11" (UID: "04a41a9f-c626-4ced-b16b-334977932f11"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.363831 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04a41a9f-c626-4ced-b16b-334977932f11-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "04a41a9f-c626-4ced-b16b-334977932f11" (UID: "04a41a9f-c626-4ced-b16b-334977932f11"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.363828 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04a41a9f-c626-4ced-b16b-334977932f11-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "04a41a9f-c626-4ced-b16b-334977932f11" (UID: "04a41a9f-c626-4ced-b16b-334977932f11"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.368267 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04a41a9f-c626-4ced-b16b-334977932f11-kube-api-access-t77ql" (OuterVolumeSpecName: "kube-api-access-t77ql") pod "04a41a9f-c626-4ced-b16b-334977932f11" (UID: "04a41a9f-c626-4ced-b16b-334977932f11"). InnerVolumeSpecName "kube-api-access-t77ql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.443089 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.468852 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04a41a9f-c626-4ced-b16b-334977932f11-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.468917 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/04a41a9f-c626-4ced-b16b-334977932f11-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.468931 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a41a9f-c626-4ced-b16b-334977932f11-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.468946 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t77ql\" (UniqueName: \"kubernetes.io/projected/04a41a9f-c626-4ced-b16b-334977932f11-kube-api-access-t77ql\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.589428 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-92ks9"] Jan 06 14:17:06 crc kubenswrapper[4869]: W0106 14:17:06.595177 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e7566f7_7393_481e_bac9_db0f5c880b46.slice/crio-ad11b1a2b80aa69c4dd0aca0361ab3453ebd5123361f3dfe5d87b000d2ef73c0 WatchSource:0}: Error finding container ad11b1a2b80aa69c4dd0aca0361ab3453ebd5123361f3dfe5d87b000d2ef73c0: Status 404 returned error can't find the container with id ad11b1a2b80aa69c4dd0aca0361ab3453ebd5123361f3dfe5d87b000d2ef73c0 Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.620345 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7cqhv" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.748083 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-7jvwr"] Jan 06 14:17:06 crc kubenswrapper[4869]: E0106 14:17:06.748472 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f6c22c-6057-432d-b208-7d80e8accda8" containerName="mariadb-account-create-update" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.748499 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f6c22c-6057-432d-b208-7d80e8accda8" containerName="mariadb-account-create-update" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.748738 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="17f6c22c-6057-432d-b208-7d80e8accda8" containerName="mariadb-account-create-update" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.749377 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7jvwr" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.755138 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7jvwr"] Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.774519 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shxvr\" (UniqueName: \"kubernetes.io/projected/17f6c22c-6057-432d-b208-7d80e8accda8-kube-api-access-shxvr\") pod \"17f6c22c-6057-432d-b208-7d80e8accda8\" (UID: \"17f6c22c-6057-432d-b208-7d80e8accda8\") " Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.774691 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17f6c22c-6057-432d-b208-7d80e8accda8-operator-scripts\") pod \"17f6c22c-6057-432d-b208-7d80e8accda8\" (UID: \"17f6c22c-6057-432d-b208-7d80e8accda8\") " Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.775621 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17f6c22c-6057-432d-b208-7d80e8accda8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "17f6c22c-6057-432d-b208-7d80e8accda8" (UID: "17f6c22c-6057-432d-b208-7d80e8accda8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.778747 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17f6c22c-6057-432d-b208-7d80e8accda8-kube-api-access-shxvr" (OuterVolumeSpecName: "kube-api-access-shxvr") pod "17f6c22c-6057-432d-b208-7d80e8accda8" (UID: "17f6c22c-6057-432d-b208-7d80e8accda8"). InnerVolumeSpecName "kube-api-access-shxvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.869160 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-9e7b-account-create-update-2xfb5"] Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.870312 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-9e7b-account-create-update-2xfb5" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.872498 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.876031 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e8d7b8f-1d96-4295-8fd6-954a19ecbbed-operator-scripts\") pod \"keystone-db-create-7jvwr\" (UID: \"8e8d7b8f-1d96-4295-8fd6-954a19ecbbed\") " pod="openstack/keystone-db-create-7jvwr" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.876219 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdct7\" (UniqueName: \"kubernetes.io/projected/8e8d7b8f-1d96-4295-8fd6-954a19ecbbed-kube-api-access-qdct7\") pod \"keystone-db-create-7jvwr\" (UID: \"8e8d7b8f-1d96-4295-8fd6-954a19ecbbed\") " pod="openstack/keystone-db-create-7jvwr" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.876301 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shxvr\" (UniqueName: \"kubernetes.io/projected/17f6c22c-6057-432d-b208-7d80e8accda8-kube-api-access-shxvr\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.876317 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17f6c22c-6057-432d-b208-7d80e8accda8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.878142 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9e7b-account-create-update-2xfb5"] Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.977722 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e84938b5-25e0-41d4-b97f-930d703f54e9-operator-scripts\") pod \"keystone-9e7b-account-create-update-2xfb5\" (UID: \"e84938b5-25e0-41d4-b97f-930d703f54e9\") " pod="openstack/keystone-9e7b-account-create-update-2xfb5" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.977806 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdct7\" (UniqueName: \"kubernetes.io/projected/8e8d7b8f-1d96-4295-8fd6-954a19ecbbed-kube-api-access-qdct7\") pod \"keystone-db-create-7jvwr\" (UID: \"8e8d7b8f-1d96-4295-8fd6-954a19ecbbed\") " pod="openstack/keystone-db-create-7jvwr" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.977840 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e8d7b8f-1d96-4295-8fd6-954a19ecbbed-operator-scripts\") pod \"keystone-db-create-7jvwr\" (UID: \"8e8d7b8f-1d96-4295-8fd6-954a19ecbbed\") " pod="openstack/keystone-db-create-7jvwr" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.977955 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v57p9\" (UniqueName: \"kubernetes.io/projected/e84938b5-25e0-41d4-b97f-930d703f54e9-kube-api-access-v57p9\") pod \"keystone-9e7b-account-create-update-2xfb5\" (UID: \"e84938b5-25e0-41d4-b97f-930d703f54e9\") " pod="openstack/keystone-9e7b-account-create-update-2xfb5" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.979258 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e8d7b8f-1d96-4295-8fd6-954a19ecbbed-operator-scripts\") pod \"keystone-db-create-7jvwr\" (UID: \"8e8d7b8f-1d96-4295-8fd6-954a19ecbbed\") " pod="openstack/keystone-db-create-7jvwr" Jan 06 14:17:06 crc kubenswrapper[4869]: I0106 14:17:06.995320 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdct7\" (UniqueName: \"kubernetes.io/projected/8e8d7b8f-1d96-4295-8fd6-954a19ecbbed-kube-api-access-qdct7\") pod \"keystone-db-create-7jvwr\" (UID: \"8e8d7b8f-1d96-4295-8fd6-954a19ecbbed\") " pod="openstack/keystone-db-create-7jvwr" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.079239 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-7jvwr" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.079932 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v57p9\" (UniqueName: \"kubernetes.io/projected/e84938b5-25e0-41d4-b97f-930d703f54e9-kube-api-access-v57p9\") pod \"keystone-9e7b-account-create-update-2xfb5\" (UID: \"e84938b5-25e0-41d4-b97f-930d703f54e9\") " pod="openstack/keystone-9e7b-account-create-update-2xfb5" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.080005 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e84938b5-25e0-41d4-b97f-930d703f54e9-operator-scripts\") pod \"keystone-9e7b-account-create-update-2xfb5\" (UID: \"e84938b5-25e0-41d4-b97f-930d703f54e9\") " pod="openstack/keystone-9e7b-account-create-update-2xfb5" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.080856 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e84938b5-25e0-41d4-b97f-930d703f54e9-operator-scripts\") pod \"keystone-9e7b-account-create-update-2xfb5\" (UID: \"e84938b5-25e0-41d4-b97f-930d703f54e9\") " pod="openstack/keystone-9e7b-account-create-update-2xfb5" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.082190 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-2p5s8"] Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.083488 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2p5s8" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.090043 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2p5s8"] Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.113840 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v57p9\" (UniqueName: \"kubernetes.io/projected/e84938b5-25e0-41d4-b97f-930d703f54e9-kube-api-access-v57p9\") pod \"keystone-9e7b-account-create-update-2xfb5\" (UID: \"e84938b5-25e0-41d4-b97f-930d703f54e9\") " pod="openstack/keystone-9e7b-account-create-update-2xfb5" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.217232 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-9e7b-account-create-update-2xfb5" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.242097 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"025cc3c7-4b77-4fcf-b05e-4abde5125639","Type":"ContainerStarted","Data":"8add892f5323a64ff63d580d8cf0c32e4d2d56108b55d74739baed913435a418"} Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.246965 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-l72g5" event={"ID":"580a8eb0-0af1-4e26-922f-2714f581c604","Type":"ContainerStarted","Data":"bc474583d271b6735ccdddae334d24733676aa76c4ce062d3a8349c3aaf1093c"} Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.247021 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-l72g5" event={"ID":"580a8eb0-0af1-4e26-922f-2714f581c604","Type":"ContainerStarted","Data":"28923c21c154935e458665c1c439c695bc8112439f2b209ae1aa01a0b01d6ab0"} Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.255584 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-1cd5-account-create-update-chlpx"] Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.256766 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1cd5-account-create-update-chlpx" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.276589 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7cqhv" event={"ID":"17f6c22c-6057-432d-b208-7d80e8accda8","Type":"ContainerDied","Data":"974afdb1509aa2e55ed048fe652ad022272894e46e99b29dbda20433d030f88e"} Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.276650 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="974afdb1509aa2e55ed048fe652ad022272894e46e99b29dbda20433d030f88e" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.276677 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7cqhv" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.282553 4869 generic.go:334] "Generic (PLEG): container finished" podID="1e7566f7-7393-481e-bac9-db0f5c880b46" containerID="21ef037692ad5f98a490a550a7e2853e65723a4761dea31c7300997ebed79b02" exitCode=0 Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.282708 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-w2lmg" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.282903 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" event={"ID":"1e7566f7-7393-481e-bac9-db0f5c880b46","Type":"ContainerDied","Data":"21ef037692ad5f98a490a550a7e2853e65723a4761dea31c7300997ebed79b02"} Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.282973 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" event={"ID":"1e7566f7-7393-481e-bac9-db0f5c880b46","Type":"ContainerStarted","Data":"ad11b1a2b80aa69c4dd0aca0361ab3453ebd5123361f3dfe5d87b000d2ef73c0"} Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.283183 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fe5915b-db4a-4fe4-9b6f-7bb930727ccf-operator-scripts\") pod \"placement-db-create-2p5s8\" (UID: \"9fe5915b-db4a-4fe4-9b6f-7bb930727ccf\") " pod="openstack/placement-db-create-2p5s8" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.283249 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blmbl\" (UniqueName: \"kubernetes.io/projected/9fe5915b-db4a-4fe4-9b6f-7bb930727ccf-kube-api-access-blmbl\") pod \"placement-db-create-2p5s8\" (UID: \"9fe5915b-db4a-4fe4-9b6f-7bb930727ccf\") " pod="openstack/placement-db-create-2p5s8" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.291701 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1cd5-account-create-update-chlpx"] Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.301969 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-l72g5" podStartSLOduration=2.301950501 podStartE2EDuration="2.301950501s" podCreationTimestamp="2026-01-06 14:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:17:07.277892865 +0000 UTC m=+1045.817580539" watchObservedRunningTime="2026-01-06 14:17:07.301950501 +0000 UTC m=+1045.841638165" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.330371 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.384461 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r6hw\" (UniqueName: \"kubernetes.io/projected/f7d8c02b-1c43-45e6-b42c-28b229c349be-kube-api-access-2r6hw\") pod \"placement-1cd5-account-create-update-chlpx\" (UID: \"f7d8c02b-1c43-45e6-b42c-28b229c349be\") " pod="openstack/placement-1cd5-account-create-update-chlpx" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.384792 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7d8c02b-1c43-45e6-b42c-28b229c349be-operator-scripts\") pod \"placement-1cd5-account-create-update-chlpx\" (UID: \"f7d8c02b-1c43-45e6-b42c-28b229c349be\") " pod="openstack/placement-1cd5-account-create-update-chlpx" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.384846 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9fe5915b-db4a-4fe4-9b6f-7bb930727ccf-operator-scripts\") pod \"placement-db-create-2p5s8\" (UID: \"9fe5915b-db4a-4fe4-9b6f-7bb930727ccf\") " pod="openstack/placement-db-create-2p5s8" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.384875 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blmbl\" (UniqueName: \"kubernetes.io/projected/9fe5915b-db4a-4fe4-9b6f-7bb930727ccf-kube-api-access-blmbl\") pod \"placement-db-create-2p5s8\" (UID: \"9fe5915b-db4a-4fe4-9b6f-7bb930727ccf\") " pod="openstack/placement-db-create-2p5s8" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.386058 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fe5915b-db4a-4fe4-9b6f-7bb930727ccf-operator-scripts\") pod \"placement-db-create-2p5s8\" (UID: \"9fe5915b-db4a-4fe4-9b6f-7bb930727ccf\") " pod="openstack/placement-db-create-2p5s8" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.394934 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-w2lmg"] Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.402152 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-w2lmg"] Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.409066 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blmbl\" (UniqueName: \"kubernetes.io/projected/9fe5915b-db4a-4fe4-9b6f-7bb930727ccf-kube-api-access-blmbl\") pod \"placement-db-create-2p5s8\" (UID: \"9fe5915b-db4a-4fe4-9b6f-7bb930727ccf\") " pod="openstack/placement-db-create-2p5s8" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.442746 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-694hw"] Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.443948 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-694hw" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.451948 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-694hw"] Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.489028 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7d8c02b-1c43-45e6-b42c-28b229c349be-operator-scripts\") pod \"placement-1cd5-account-create-update-chlpx\" (UID: \"f7d8c02b-1c43-45e6-b42c-28b229c349be\") " pod="openstack/placement-1cd5-account-create-update-chlpx" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.489134 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r6hw\" (UniqueName: \"kubernetes.io/projected/f7d8c02b-1c43-45e6-b42c-28b229c349be-kube-api-access-2r6hw\") pod \"placement-1cd5-account-create-update-chlpx\" (UID: \"f7d8c02b-1c43-45e6-b42c-28b229c349be\") " pod="openstack/placement-1cd5-account-create-update-chlpx" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.489995 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7d8c02b-1c43-45e6-b42c-28b229c349be-operator-scripts\") pod \"placement-1cd5-account-create-update-chlpx\" (UID: \"f7d8c02b-1c43-45e6-b42c-28b229c349be\") " pod="openstack/placement-1cd5-account-create-update-chlpx" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.497888 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-7jvwr"] Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.506704 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r6hw\" (UniqueName: \"kubernetes.io/projected/f7d8c02b-1c43-45e6-b42c-28b229c349be-kube-api-access-2r6hw\") pod \"placement-1cd5-account-create-update-chlpx\" (UID: \"f7d8c02b-1c43-45e6-b42c-28b229c349be\") " pod="openstack/placement-1cd5-account-create-update-chlpx" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.550303 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9c77-account-create-update-4n55x"] Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.553023 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9c77-account-create-update-4n55x" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.554772 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.560212 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9c77-account-create-update-4n55x"] Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.589897 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f47b080-a76c-4e14-bc35-6144be23522c-operator-scripts\") pod \"glance-db-create-694hw\" (UID: \"8f47b080-a76c-4e14-bc35-6144be23522c\") " pod="openstack/glance-db-create-694hw" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.589977 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2ltx\" (UniqueName: \"kubernetes.io/projected/8f47b080-a76c-4e14-bc35-6144be23522c-kube-api-access-j2ltx\") pod \"glance-db-create-694hw\" (UID: \"8f47b080-a76c-4e14-bc35-6144be23522c\") " pod="openstack/glance-db-create-694hw" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.636304 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1cd5-account-create-update-chlpx" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.693025 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2ltx\" (UniqueName: \"kubernetes.io/projected/8f47b080-a76c-4e14-bc35-6144be23522c-kube-api-access-j2ltx\") pod \"glance-db-create-694hw\" (UID: \"8f47b080-a76c-4e14-bc35-6144be23522c\") " pod="openstack/glance-db-create-694hw" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.693382 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b17ac2-fe53-4c33-9cf0-3142a52dc576-operator-scripts\") pod \"glance-9c77-account-create-update-4n55x\" (UID: \"b7b17ac2-fe53-4c33-9cf0-3142a52dc576\") " pod="openstack/glance-9c77-account-create-update-4n55x" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.693461 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj88t\" (UniqueName: \"kubernetes.io/projected/b7b17ac2-fe53-4c33-9cf0-3142a52dc576-kube-api-access-tj88t\") pod \"glance-9c77-account-create-update-4n55x\" (UID: \"b7b17ac2-fe53-4c33-9cf0-3142a52dc576\") " pod="openstack/glance-9c77-account-create-update-4n55x" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.693503 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f47b080-a76c-4e14-bc35-6144be23522c-operator-scripts\") pod \"glance-db-create-694hw\" (UID: \"8f47b080-a76c-4e14-bc35-6144be23522c\") " pod="openstack/glance-db-create-694hw" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.694290 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f47b080-a76c-4e14-bc35-6144be23522c-operator-scripts\") pod \"glance-db-create-694hw\" (UID: \"8f47b080-a76c-4e14-bc35-6144be23522c\") " pod="openstack/glance-db-create-694hw" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.700987 4869 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/placement-db-create-2p5s8" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.713250 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2ltx\" (UniqueName: \"kubernetes.io/projected/8f47b080-a76c-4e14-bc35-6144be23522c-kube-api-access-j2ltx\") pod \"glance-db-create-694hw\" (UID: \"8f47b080-a76c-4e14-bc35-6144be23522c\") " pod="openstack/glance-db-create-694hw" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.717710 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04a41a9f-c626-4ced-b16b-334977932f11" path="/var/lib/kubelet/pods/04a41a9f-c626-4ced-b16b-334977932f11/volumes" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.796022 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b17ac2-fe53-4c33-9cf0-3142a52dc576-operator-scripts\") pod \"glance-9c77-account-create-update-4n55x\" (UID: \"b7b17ac2-fe53-4c33-9cf0-3142a52dc576\") " pod="openstack/glance-9c77-account-create-update-4n55x" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.796127 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj88t\" (UniqueName: \"kubernetes.io/projected/b7b17ac2-fe53-4c33-9cf0-3142a52dc576-kube-api-access-tj88t\") pod \"glance-9c77-account-create-update-4n55x\" (UID: \"b7b17ac2-fe53-4c33-9cf0-3142a52dc576\") " pod="openstack/glance-9c77-account-create-update-4n55x" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.797044 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b17ac2-fe53-4c33-9cf0-3142a52dc576-operator-scripts\") pod \"glance-9c77-account-create-update-4n55x\" (UID: \"b7b17ac2-fe53-4c33-9cf0-3142a52dc576\") " pod="openstack/glance-9c77-account-create-update-4n55x" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.816972 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-9e7b-account-create-update-2xfb5"] Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.822059 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj88t\" (UniqueName: \"kubernetes.io/projected/b7b17ac2-fe53-4c33-9cf0-3142a52dc576-kube-api-access-tj88t\") pod \"glance-9c77-account-create-update-4n55x\" (UID: \"b7b17ac2-fe53-4c33-9cf0-3142a52dc576\") " pod="openstack/glance-9c77-account-create-update-4n55x" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.882553 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-694hw" Jan 06 14:17:07 crc kubenswrapper[4869]: I0106 14:17:07.893216 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9c77-account-create-update-4n55x" Jan 06 14:17:08 crc kubenswrapper[4869]: W0106 14:17:08.014010 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode84938b5_25e0_41d4_b97f_930d703f54e9.slice/crio-d9ae51375022ed35aeec34efab29bb6c8af79b65a36c24b1bf938d49b5a41495 WatchSource:0}: Error finding container d9ae51375022ed35aeec34efab29bb6c8af79b65a36c24b1bf938d49b5a41495: Status 404 returned error can't find the container with id d9ae51375022ed35aeec34efab29bb6c8af79b65a36c24b1bf938d49b5a41495 Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.030881 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-mmg7w" podUID="aaa27703-fd83-40d0-a8fb-8d6962212f8f" containerName="ovn-controller" probeResult="failure" output=< Jan 06 14:17:08 crc kubenswrapper[4869]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 06 14:17:08 crc kubenswrapper[4869]: > Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.089036 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1cd5-account-create-update-chlpx"] Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.121994 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.125445 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-64n65" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.298937 4869 generic.go:334] "Generic (PLEG): container finished" podID="8e8d7b8f-1d96-4295-8fd6-954a19ecbbed" containerID="e43d187ad82cbed7e0da120fc4fde25249907a8cc6ac648cf927ca96a74eb96e" exitCode=0 Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.299424 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7jvwr" event={"ID":"8e8d7b8f-1d96-4295-8fd6-954a19ecbbed","Type":"ContainerDied","Data":"e43d187ad82cbed7e0da120fc4fde25249907a8cc6ac648cf927ca96a74eb96e"} Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.299577 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7jvwr" event={"ID":"8e8d7b8f-1d96-4295-8fd6-954a19ecbbed","Type":"ContainerStarted","Data":"e44b88e3e3373973600ffcbf49ab8ba9697c648da1f39a8be35a6102efab69cb"} Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.301417 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9e7b-account-create-update-2xfb5" event={"ID":"e84938b5-25e0-41d4-b97f-930d703f54e9","Type":"ContainerStarted","Data":"d9ae51375022ed35aeec34efab29bb6c8af79b65a36c24b1bf938d49b5a41495"} Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.303602 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1cd5-account-create-update-chlpx" event={"ID":"f7d8c02b-1c43-45e6-b42c-28b229c349be","Type":"ContainerStarted","Data":"4265ebf907b98d642c77e3d7c77569df335ad16ed937a2ebf3ec6d6b399ef798"} Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.317906 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" event={"ID":"1e7566f7-7393-481e-bac9-db0f5c880b46","Type":"ContainerStarted","Data":"d64ebf53feecdc34a2fca950b056fbb93532bbd43e41cb4e164d56f31c298a0a"} Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.361127 4869 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/placement-db-create-2p5s8"] Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.363680 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" podStartSLOduration=3.363618713 podStartE2EDuration="3.363618713s" podCreationTimestamp="2026-01-06 14:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:17:08.351543239 +0000 UTC m=+1046.891230903" watchObservedRunningTime="2026-01-06 14:17:08.363618713 +0000 UTC m=+1046.903306377" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.410744 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-mmg7w-config-qcpgl"] Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.412464 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.419537 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.420869 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mmg7w-config-qcpgl"] Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.488765 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-694hw"] Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.509191 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l5j2\" (UniqueName: \"kubernetes.io/projected/e3db5340-07de-4174-902b-747c29a28f97-kube-api-access-2l5j2\") pod \"ovn-controller-mmg7w-config-qcpgl\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.509241 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3db5340-07de-4174-902b-747c29a28f97-scripts\") pod \"ovn-controller-mmg7w-config-qcpgl\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.509295 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e3db5340-07de-4174-902b-747c29a28f97-additional-scripts\") pod \"ovn-controller-mmg7w-config-qcpgl\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.509334 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e3db5340-07de-4174-902b-747c29a28f97-var-run-ovn\") pod \"ovn-controller-mmg7w-config-qcpgl\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.509356 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e3db5340-07de-4174-902b-747c29a28f97-var-log-ovn\") pod \"ovn-controller-mmg7w-config-qcpgl\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " 
pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.509374 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e3db5340-07de-4174-902b-747c29a28f97-var-run\") pod \"ovn-controller-mmg7w-config-qcpgl\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.541129 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9c77-account-create-update-4n55x"] Jan 06 14:17:08 crc kubenswrapper[4869]: W0106 14:17:08.549514 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7b17ac2_fe53_4c33_9cf0_3142a52dc576.slice/crio-73f35c8e0fd4e5987464498a05dbdcd45ea2fc9f1adc364d24f4f011c1a537d9 WatchSource:0}: Error finding container 73f35c8e0fd4e5987464498a05dbdcd45ea2fc9f1adc364d24f4f011c1a537d9: Status 404 returned error can't find the container with id 73f35c8e0fd4e5987464498a05dbdcd45ea2fc9f1adc364d24f4f011c1a537d9 Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.610376 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e3db5340-07de-4174-902b-747c29a28f97-additional-scripts\") pod \"ovn-controller-mmg7w-config-qcpgl\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.610440 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e3db5340-07de-4174-902b-747c29a28f97-var-run-ovn\") pod \"ovn-controller-mmg7w-config-qcpgl\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.610460 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e3db5340-07de-4174-902b-747c29a28f97-var-log-ovn\") pod \"ovn-controller-mmg7w-config-qcpgl\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.610478 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e3db5340-07de-4174-902b-747c29a28f97-var-run\") pod \"ovn-controller-mmg7w-config-qcpgl\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.610543 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l5j2\" (UniqueName: \"kubernetes.io/projected/e3db5340-07de-4174-902b-747c29a28f97-kube-api-access-2l5j2\") pod \"ovn-controller-mmg7w-config-qcpgl\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.610564 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3db5340-07de-4174-902b-747c29a28f97-scripts\") pod \"ovn-controller-mmg7w-config-qcpgl\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 
14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.611169 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e3db5340-07de-4174-902b-747c29a28f97-var-run-ovn\") pod \"ovn-controller-mmg7w-config-qcpgl\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.611242 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e3db5340-07de-4174-902b-747c29a28f97-var-run\") pod \"ovn-controller-mmg7w-config-qcpgl\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.611272 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e3db5340-07de-4174-902b-747c29a28f97-var-log-ovn\") pod \"ovn-controller-mmg7w-config-qcpgl\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.611762 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e3db5340-07de-4174-902b-747c29a28f97-additional-scripts\") pod \"ovn-controller-mmg7w-config-qcpgl\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.614330 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3db5340-07de-4174-902b-747c29a28f97-scripts\") pod \"ovn-controller-mmg7w-config-qcpgl\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.631373 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l5j2\" (UniqueName: \"kubernetes.io/projected/e3db5340-07de-4174-902b-747c29a28f97-kube-api-access-2l5j2\") pod \"ovn-controller-mmg7w-config-qcpgl\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 14:17:08 crc kubenswrapper[4869]: I0106 14:17:08.850408 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.295604 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-mmg7w-config-qcpgl"] Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.326552 4869 generic.go:334] "Generic (PLEG): container finished" podID="e84938b5-25e0-41d4-b97f-930d703f54e9" containerID="6132bf5750c82f4376a8a5e267091c41ee4b713dd2b2e6ce5eed7b2e686b84ea" exitCode=0 Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.326622 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9e7b-account-create-update-2xfb5" event={"ID":"e84938b5-25e0-41d4-b97f-930d703f54e9","Type":"ContainerDied","Data":"6132bf5750c82f4376a8a5e267091c41ee4b713dd2b2e6ce5eed7b2e686b84ea"} Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.328011 4869 generic.go:334] "Generic (PLEG): container finished" podID="8f47b080-a76c-4e14-bc35-6144be23522c" containerID="dafb7fc434f608e57d36a32efeb35d7b7799fcd2c32a62cde1342cb35702bfa0" exitCode=0 Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.328084 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-694hw" event={"ID":"8f47b080-a76c-4e14-bc35-6144be23522c","Type":"ContainerDied","Data":"dafb7fc434f608e57d36a32efeb35d7b7799fcd2c32a62cde1342cb35702bfa0"} Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.328110 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-694hw" event={"ID":"8f47b080-a76c-4e14-bc35-6144be23522c","Type":"ContainerStarted","Data":"357ca2167c673bd9ce7adb8e8238928eeb0ae67c4dac12e3784084b51ef0f432"} Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.329187 4869 generic.go:334] "Generic (PLEG): container finished" podID="f7d8c02b-1c43-45e6-b42c-28b229c349be" containerID="b8fef457c44e99821bf3aecbd4c9c5499c3a7d49ad776bef5684808fbf6115ad" exitCode=0 Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.329242 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1cd5-account-create-update-chlpx" event={"ID":"f7d8c02b-1c43-45e6-b42c-28b229c349be","Type":"ContainerDied","Data":"b8fef457c44e99821bf3aecbd4c9c5499c3a7d49ad776bef5684808fbf6115ad"} Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.330408 4869 generic.go:334] "Generic (PLEG): container finished" podID="b7b17ac2-fe53-4c33-9cf0-3142a52dc576" containerID="c6ac9d4abdc21b306e280d469f81737bfa0512e12d41888f8b5594a635716d9e" exitCode=0 Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.330463 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9c77-account-create-update-4n55x" event={"ID":"b7b17ac2-fe53-4c33-9cf0-3142a52dc576","Type":"ContainerDied","Data":"c6ac9d4abdc21b306e280d469f81737bfa0512e12d41888f8b5594a635716d9e"} Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.330487 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9c77-account-create-update-4n55x" event={"ID":"b7b17ac2-fe53-4c33-9cf0-3142a52dc576","Type":"ContainerStarted","Data":"73f35c8e0fd4e5987464498a05dbdcd45ea2fc9f1adc364d24f4f011c1a537d9"} Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.331422 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mmg7w-config-qcpgl" event={"ID":"e3db5340-07de-4174-902b-747c29a28f97","Type":"ContainerStarted","Data":"79106a20084e9f20cd99766311789ea65c05a9c526b07389d0229b8da8fa960c"} Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 
14:17:09.332622 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"025cc3c7-4b77-4fcf-b05e-4abde5125639","Type":"ContainerStarted","Data":"11a9a44e325951b8615f181a21c5cd24e1e64cd279cfc877fd37f800ee3569cc"} Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.332640 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"025cc3c7-4b77-4fcf-b05e-4abde5125639","Type":"ContainerStarted","Data":"2118435d55ed3784a1bc6aae7972efd7d2bcce63c3b09ea129515886d29e5261"} Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.332914 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.333516 4869 generic.go:334] "Generic (PLEG): container finished" podID="9fe5915b-db4a-4fe4-9b6f-7bb930727ccf" containerID="54a396271a91ca6e7b7bbda20013274012e26713c6c64fc565b461cc97e02461" exitCode=0 Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.333580 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2p5s8" event={"ID":"9fe5915b-db4a-4fe4-9b6f-7bb930727ccf","Type":"ContainerDied","Data":"54a396271a91ca6e7b7bbda20013274012e26713c6c64fc565b461cc97e02461"} Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.333599 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2p5s8" event={"ID":"9fe5915b-db4a-4fe4-9b6f-7bb930727ccf","Type":"ContainerStarted","Data":"cbdaf0c47526aa2c848111d75b561703a55cc8c955c92a64542568c59c72774e"} Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.333869 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.398737 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.756045709 podStartE2EDuration="4.398715126s" podCreationTimestamp="2026-01-06 14:17:05 +0000 UTC" firstStartedPulling="2026-01-06 14:17:06.441511878 +0000 UTC m=+1044.981199542" lastFinishedPulling="2026-01-06 14:17:08.084181295 +0000 UTC m=+1046.623868959" observedRunningTime="2026-01-06 14:17:09.389965485 +0000 UTC m=+1047.929653169" watchObservedRunningTime="2026-01-06 14:17:09.398715126 +0000 UTC m=+1047.938402790"
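
The "Observed pod startup duration" records above carry the kubelet's startup SLI: podStartSLOduration excludes image-pull time, while podStartE2EDuration includes it. For ovn-northd-0 the gap, 4.398715126s minus 2.756045709s, is about 1.64s, which matches the firstStartedPulling to lastFinishedPulling window exactly; when both pull timestamps are the 0001-01-01 sentinel (image already present), the two durations coincide, as in the dnsmasq record earlier. A minimal sketch that pulls these records out of the journal on stdin; the 3-second threshold is an arbitrary illustration, not something the kubelet enforces:

    import re
    import sys

    # Matches the kubelet's pod_startup_latency_tracker record as it appears
    # in the journal lines above.
    REC = re.compile(
        r'"Observed pod startup duration" pod="([^"]+)"'
        r' podStartSLOduration=([0-9.]+) podStartE2EDuration="([0-9.]+)s"'
    )

    for line in sys.stdin:
        for pod, slo, e2e in REC.findall(line):
            # e2e - slo is roughly the time spent pulling images.
            flag = "  <-- over 3s end to end" if float(e2e) > 3.0 else ""
            print(f"{pod}: slo={float(slo):.3f}s e2e={float(e2e):.3f}s{flag}")
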
Need to start a new one" pod="openstack/keystone-db-create-7jvwr" Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.732000 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdct7\" (UniqueName: \"kubernetes.io/projected/8e8d7b8f-1d96-4295-8fd6-954a19ecbbed-kube-api-access-qdct7\") pod \"8e8d7b8f-1d96-4295-8fd6-954a19ecbbed\" (UID: \"8e8d7b8f-1d96-4295-8fd6-954a19ecbbed\") " Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.732088 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e8d7b8f-1d96-4295-8fd6-954a19ecbbed-operator-scripts\") pod \"8e8d7b8f-1d96-4295-8fd6-954a19ecbbed\" (UID: \"8e8d7b8f-1d96-4295-8fd6-954a19ecbbed\") " Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.733003 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8d7b8f-1d96-4295-8fd6-954a19ecbbed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e8d7b8f-1d96-4295-8fd6-954a19ecbbed" (UID: "8e8d7b8f-1d96-4295-8fd6-954a19ecbbed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.739253 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e8d7b8f-1d96-4295-8fd6-954a19ecbbed-kube-api-access-qdct7" (OuterVolumeSpecName: "kube-api-access-qdct7") pod "8e8d7b8f-1d96-4295-8fd6-954a19ecbbed" (UID: "8e8d7b8f-1d96-4295-8fd6-954a19ecbbed"). InnerVolumeSpecName "kube-api-access-qdct7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.833982 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdct7\" (UniqueName: \"kubernetes.io/projected/8e8d7b8f-1d96-4295-8fd6-954a19ecbbed-kube-api-access-qdct7\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:09 crc kubenswrapper[4869]: I0106 14:17:09.834018 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e8d7b8f-1d96-4295-8fd6-954a19ecbbed-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:10 crc kubenswrapper[4869]: I0106 14:17:10.341381 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-7jvwr" event={"ID":"8e8d7b8f-1d96-4295-8fd6-954a19ecbbed","Type":"ContainerDied","Data":"e44b88e3e3373973600ffcbf49ab8ba9697c648da1f39a8be35a6102efab69cb"} Jan 06 14:17:10 crc kubenswrapper[4869]: I0106 14:17:10.341713 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e44b88e3e3373973600ffcbf49ab8ba9697c648da1f39a8be35a6102efab69cb" Jan 06 14:17:10 crc kubenswrapper[4869]: I0106 14:17:10.342115 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-7jvwr" Jan 06 14:17:10 crc kubenswrapper[4869]: I0106 14:17:10.344155 4869 generic.go:334] "Generic (PLEG): container finished" podID="e3db5340-07de-4174-902b-747c29a28f97" containerID="84be22290d5b8f49531beba7ffe3725ef350e30bb3b6ceaaf9e2a33c055f6a51" exitCode=0 Jan 06 14:17:10 crc kubenswrapper[4869]: I0106 14:17:10.344307 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-mmg7w-config-qcpgl" event={"ID":"e3db5340-07de-4174-902b-747c29a28f97","Type":"ContainerDied","Data":"84be22290d5b8f49531beba7ffe3725ef350e30bb3b6ceaaf9e2a33c055f6a51"} Jan 06 14:17:10 crc kubenswrapper[4869]: I0106 14:17:10.546297 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-7cqhv"] Jan 06 14:17:10 crc kubenswrapper[4869]: I0106 14:17:10.552965 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-7cqhv"] Jan 06 14:17:10 crc kubenswrapper[4869]: I0106 14:17:10.742498 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9c77-account-create-update-4n55x" Jan 06 14:17:10 crc kubenswrapper[4869]: I0106 14:17:10.850483 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b17ac2-fe53-4c33-9cf0-3142a52dc576-operator-scripts\") pod \"b7b17ac2-fe53-4c33-9cf0-3142a52dc576\" (UID: \"b7b17ac2-fe53-4c33-9cf0-3142a52dc576\") " Jan 06 14:17:10 crc kubenswrapper[4869]: I0106 14:17:10.850702 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj88t\" (UniqueName: \"kubernetes.io/projected/b7b17ac2-fe53-4c33-9cf0-3142a52dc576-kube-api-access-tj88t\") pod \"b7b17ac2-fe53-4c33-9cf0-3142a52dc576\" (UID: \"b7b17ac2-fe53-4c33-9cf0-3142a52dc576\") " Jan 06 14:17:10 crc kubenswrapper[4869]: I0106 14:17:10.851289 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7b17ac2-fe53-4c33-9cf0-3142a52dc576-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7b17ac2-fe53-4c33-9cf0-3142a52dc576" (UID: "b7b17ac2-fe53-4c33-9cf0-3142a52dc576"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:10 crc kubenswrapper[4869]: I0106 14:17:10.856182 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7b17ac2-fe53-4c33-9cf0-3142a52dc576-kube-api-access-tj88t" (OuterVolumeSpecName: "kube-api-access-tj88t") pod "b7b17ac2-fe53-4c33-9cf0-3142a52dc576" (UID: "b7b17ac2-fe53-4c33-9cf0-3142a52dc576"). InnerVolumeSpecName "kube-api-access-tj88t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:17:10 crc kubenswrapper[4869]: I0106 14:17:10.931944 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1cd5-account-create-update-chlpx" Jan 06 14:17:10 crc kubenswrapper[4869]: I0106 14:17:10.937824 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-694hw" Jan 06 14:17:10 crc kubenswrapper[4869]: I0106 14:17:10.945162 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-2p5s8" Jan 06 14:17:10 crc kubenswrapper[4869]: I0106 14:17:10.953221 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj88t\" (UniqueName: \"kubernetes.io/projected/b7b17ac2-fe53-4c33-9cf0-3142a52dc576-kube-api-access-tj88t\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:10 crc kubenswrapper[4869]: I0106 14:17:10.953276 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7b17ac2-fe53-4c33-9cf0-3142a52dc576-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:10 crc kubenswrapper[4869]: I0106 14:17:10.964370 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9e7b-account-create-update-2xfb5" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.054361 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v57p9\" (UniqueName: \"kubernetes.io/projected/e84938b5-25e0-41d4-b97f-930d703f54e9-kube-api-access-v57p9\") pod \"e84938b5-25e0-41d4-b97f-930d703f54e9\" (UID: \"e84938b5-25e0-41d4-b97f-930d703f54e9\") " Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.054414 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e84938b5-25e0-41d4-b97f-930d703f54e9-operator-scripts\") pod \"e84938b5-25e0-41d4-b97f-930d703f54e9\" (UID: \"e84938b5-25e0-41d4-b97f-930d703f54e9\") " Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.054620 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blmbl\" (UniqueName: \"kubernetes.io/projected/9fe5915b-db4a-4fe4-9b6f-7bb930727ccf-kube-api-access-blmbl\") pod \"9fe5915b-db4a-4fe4-9b6f-7bb930727ccf\" (UID: \"9fe5915b-db4a-4fe4-9b6f-7bb930727ccf\") " Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.054767 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f47b080-a76c-4e14-bc35-6144be23522c-operator-scripts\") pod \"8f47b080-a76c-4e14-bc35-6144be23522c\" (UID: \"8f47b080-a76c-4e14-bc35-6144be23522c\") " Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.054797 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2ltx\" (UniqueName: \"kubernetes.io/projected/8f47b080-a76c-4e14-bc35-6144be23522c-kube-api-access-j2ltx\") pod \"8f47b080-a76c-4e14-bc35-6144be23522c\" (UID: \"8f47b080-a76c-4e14-bc35-6144be23522c\") " Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.054854 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7d8c02b-1c43-45e6-b42c-28b229c349be-operator-scripts\") pod \"f7d8c02b-1c43-45e6-b42c-28b229c349be\" (UID: \"f7d8c02b-1c43-45e6-b42c-28b229c349be\") " Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.054887 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fe5915b-db4a-4fe4-9b6f-7bb930727ccf-operator-scripts\") pod \"9fe5915b-db4a-4fe4-9b6f-7bb930727ccf\" (UID: \"9fe5915b-db4a-4fe4-9b6f-7bb930727ccf\") " Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.054944 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r6hw\" (UniqueName: 
\"kubernetes.io/projected/f7d8c02b-1c43-45e6-b42c-28b229c349be-kube-api-access-2r6hw\") pod \"f7d8c02b-1c43-45e6-b42c-28b229c349be\" (UID: \"f7d8c02b-1c43-45e6-b42c-28b229c349be\") " Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.055129 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f47b080-a76c-4e14-bc35-6144be23522c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f47b080-a76c-4e14-bc35-6144be23522c" (UID: "8f47b080-a76c-4e14-bc35-6144be23522c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.055444 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fe5915b-db4a-4fe4-9b6f-7bb930727ccf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9fe5915b-db4a-4fe4-9b6f-7bb930727ccf" (UID: "9fe5915b-db4a-4fe4-9b6f-7bb930727ccf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.055564 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f47b080-a76c-4e14-bc35-6144be23522c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.055631 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e84938b5-25e0-41d4-b97f-930d703f54e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e84938b5-25e0-41d4-b97f-930d703f54e9" (UID: "e84938b5-25e0-41d4-b97f-930d703f54e9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.055924 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7d8c02b-1c43-45e6-b42c-28b229c349be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f7d8c02b-1c43-45e6-b42c-28b229c349be" (UID: "f7d8c02b-1c43-45e6-b42c-28b229c349be"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.058355 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f47b080-a76c-4e14-bc35-6144be23522c-kube-api-access-j2ltx" (OuterVolumeSpecName: "kube-api-access-j2ltx") pod "8f47b080-a76c-4e14-bc35-6144be23522c" (UID: "8f47b080-a76c-4e14-bc35-6144be23522c"). InnerVolumeSpecName "kube-api-access-j2ltx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.059050 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7d8c02b-1c43-45e6-b42c-28b229c349be-kube-api-access-2r6hw" (OuterVolumeSpecName: "kube-api-access-2r6hw") pod "f7d8c02b-1c43-45e6-b42c-28b229c349be" (UID: "f7d8c02b-1c43-45e6-b42c-28b229c349be"). InnerVolumeSpecName "kube-api-access-2r6hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.061823 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fe5915b-db4a-4fe4-9b6f-7bb930727ccf-kube-api-access-blmbl" (OuterVolumeSpecName: "kube-api-access-blmbl") pod "9fe5915b-db4a-4fe4-9b6f-7bb930727ccf" (UID: "9fe5915b-db4a-4fe4-9b6f-7bb930727ccf"). 
InnerVolumeSpecName "kube-api-access-blmbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.071841 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e84938b5-25e0-41d4-b97f-930d703f54e9-kube-api-access-v57p9" (OuterVolumeSpecName: "kube-api-access-v57p9") pod "e84938b5-25e0-41d4-b97f-930d703f54e9" (UID: "e84938b5-25e0-41d4-b97f-930d703f54e9"). InnerVolumeSpecName "kube-api-access-v57p9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.157639 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blmbl\" (UniqueName: \"kubernetes.io/projected/9fe5915b-db4a-4fe4-9b6f-7bb930727ccf-kube-api-access-blmbl\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.158079 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2ltx\" (UniqueName: \"kubernetes.io/projected/8f47b080-a76c-4e14-bc35-6144be23522c-kube-api-access-j2ltx\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.158091 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7d8c02b-1c43-45e6-b42c-28b229c349be-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.158100 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9fe5915b-db4a-4fe4-9b6f-7bb930727ccf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.158110 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r6hw\" (UniqueName: \"kubernetes.io/projected/f7d8c02b-1c43-45e6-b42c-28b229c349be-kube-api-access-2r6hw\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.158118 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v57p9\" (UniqueName: \"kubernetes.io/projected/e84938b5-25e0-41d4-b97f-930d703f54e9-kube-api-access-v57p9\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.158126 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e84938b5-25e0-41d4-b97f-930d703f54e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.353641 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9c77-account-create-update-4n55x" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.353731 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9c77-account-create-update-4n55x" event={"ID":"b7b17ac2-fe53-4c33-9cf0-3142a52dc576","Type":"ContainerDied","Data":"73f35c8e0fd4e5987464498a05dbdcd45ea2fc9f1adc364d24f4f011c1a537d9"} Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.353837 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73f35c8e0fd4e5987464498a05dbdcd45ea2fc9f1adc364d24f4f011c1a537d9" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.357278 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-2p5s8" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.357299 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2p5s8" event={"ID":"9fe5915b-db4a-4fe4-9b6f-7bb930727ccf","Type":"ContainerDied","Data":"cbdaf0c47526aa2c848111d75b561703a55cc8c955c92a64542568c59c72774e"} Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.357339 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbdaf0c47526aa2c848111d75b561703a55cc8c955c92a64542568c59c72774e" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.359040 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-9e7b-account-create-update-2xfb5" event={"ID":"e84938b5-25e0-41d4-b97f-930d703f54e9","Type":"ContainerDied","Data":"d9ae51375022ed35aeec34efab29bb6c8af79b65a36c24b1bf938d49b5a41495"} Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.359080 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9ae51375022ed35aeec34efab29bb6c8af79b65a36c24b1bf938d49b5a41495" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.359449 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-9e7b-account-create-update-2xfb5" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.362253 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-694hw" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.362252 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-694hw" event={"ID":"8f47b080-a76c-4e14-bc35-6144be23522c","Type":"ContainerDied","Data":"357ca2167c673bd9ce7adb8e8238928eeb0ae67c4dac12e3784084b51ef0f432"} Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.362474 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="357ca2167c673bd9ce7adb8e8238928eeb0ae67c4dac12e3784084b51ef0f432" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.363887 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1cd5-account-create-update-chlpx" event={"ID":"f7d8c02b-1c43-45e6-b42c-28b229c349be","Type":"ContainerDied","Data":"4265ebf907b98d642c77e3d7c77569df335ad16ed937a2ebf3ec6d6b399ef798"} Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.363985 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4265ebf907b98d642c77e3d7c77569df335ad16ed937a2ebf3ec6d6b399ef798" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.363911 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1cd5-account-create-update-chlpx" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.646404 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.713783 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17f6c22c-6057-432d-b208-7d80e8accda8" path="/var/lib/kubelet/pods/17f6c22c-6057-432d-b208-7d80e8accda8/volumes" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.766885 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l5j2\" (UniqueName: \"kubernetes.io/projected/e3db5340-07de-4174-902b-747c29a28f97-kube-api-access-2l5j2\") pod \"e3db5340-07de-4174-902b-747c29a28f97\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.767017 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e3db5340-07de-4174-902b-747c29a28f97-additional-scripts\") pod \"e3db5340-07de-4174-902b-747c29a28f97\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.767044 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e3db5340-07de-4174-902b-747c29a28f97-var-run\") pod \"e3db5340-07de-4174-902b-747c29a28f97\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.767060 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e3db5340-07de-4174-902b-747c29a28f97-var-log-ovn\") pod \"e3db5340-07de-4174-902b-747c29a28f97\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.767093 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3db5340-07de-4174-902b-747c29a28f97-scripts\") pod \"e3db5340-07de-4174-902b-747c29a28f97\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.767244 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e3db5340-07de-4174-902b-747c29a28f97-var-run-ovn\") pod \"e3db5340-07de-4174-902b-747c29a28f97\" (UID: \"e3db5340-07de-4174-902b-747c29a28f97\") " Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.768488 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3db5340-07de-4174-902b-747c29a28f97-var-run" (OuterVolumeSpecName: "var-run") pod "e3db5340-07de-4174-902b-747c29a28f97" (UID: "e3db5340-07de-4174-902b-747c29a28f97"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.768544 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3db5340-07de-4174-902b-747c29a28f97-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e3db5340-07de-4174-902b-747c29a28f97" (UID: "e3db5340-07de-4174-902b-747c29a28f97"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.768588 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3db5340-07de-4174-902b-747c29a28f97-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e3db5340-07de-4174-902b-747c29a28f97" (UID: "e3db5340-07de-4174-902b-747c29a28f97"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.769463 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3db5340-07de-4174-902b-747c29a28f97-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e3db5340-07de-4174-902b-747c29a28f97" (UID: "e3db5340-07de-4174-902b-747c29a28f97"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.772755 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3db5340-07de-4174-902b-747c29a28f97-scripts" (OuterVolumeSpecName: "scripts") pod "e3db5340-07de-4174-902b-747c29a28f97" (UID: "e3db5340-07de-4174-902b-747c29a28f97"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.774650 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3db5340-07de-4174-902b-747c29a28f97-kube-api-access-2l5j2" (OuterVolumeSpecName: "kube-api-access-2l5j2") pod "e3db5340-07de-4174-902b-747c29a28f97" (UID: "e3db5340-07de-4174-902b-747c29a28f97"). InnerVolumeSpecName "kube-api-access-2l5j2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.869214 4869 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e3db5340-07de-4174-902b-747c29a28f97-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.869242 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l5j2\" (UniqueName: \"kubernetes.io/projected/e3db5340-07de-4174-902b-747c29a28f97-kube-api-access-2l5j2\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.869252 4869 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e3db5340-07de-4174-902b-747c29a28f97-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.869262 4869 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e3db5340-07de-4174-902b-747c29a28f97-var-run\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.869269 4869 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e3db5340-07de-4174-902b-747c29a28f97-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:11 crc kubenswrapper[4869]: I0106 14:17:11.869277 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3db5340-07de-4174-902b-747c29a28f97-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.374240 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-mmg7w-config-qcpgl" event={"ID":"e3db5340-07de-4174-902b-747c29a28f97","Type":"ContainerDied","Data":"79106a20084e9f20cd99766311789ea65c05a9c526b07389d0229b8da8fa960c"} Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.374609 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79106a20084e9f20cd99766311789ea65c05a9c526b07389d0229b8da8fa960c" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.374682 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-mmg7w-config-qcpgl" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.727813 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-mmg7w-config-qcpgl"] Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.732959 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-mmg7w-config-qcpgl"] Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.738000 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-v7q9b"] Jan 06 14:17:12 crc kubenswrapper[4869]: E0106 14:17:12.738289 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e84938b5-25e0-41d4-b97f-930d703f54e9" containerName="mariadb-account-create-update" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.738307 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e84938b5-25e0-41d4-b97f-930d703f54e9" containerName="mariadb-account-create-update" Jan 06 14:17:12 crc kubenswrapper[4869]: E0106 14:17:12.738319 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3db5340-07de-4174-902b-747c29a28f97" containerName="ovn-config" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.738327 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3db5340-07de-4174-902b-747c29a28f97" containerName="ovn-config" Jan 06 14:17:12 crc kubenswrapper[4869]: E0106 14:17:12.738348 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7d8c02b-1c43-45e6-b42c-28b229c349be" containerName="mariadb-account-create-update" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.738355 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7d8c02b-1c43-45e6-b42c-28b229c349be" containerName="mariadb-account-create-update" Jan 06 14:17:12 crc kubenswrapper[4869]: E0106 14:17:12.738364 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b17ac2-fe53-4c33-9cf0-3142a52dc576" containerName="mariadb-account-create-update" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.738370 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b17ac2-fe53-4c33-9cf0-3142a52dc576" containerName="mariadb-account-create-update" Jan 06 14:17:12 crc kubenswrapper[4869]: E0106 14:17:12.738376 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fe5915b-db4a-4fe4-9b6f-7bb930727ccf" containerName="mariadb-database-create" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.738382 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fe5915b-db4a-4fe4-9b6f-7bb930727ccf" containerName="mariadb-database-create" Jan 06 14:17:12 crc kubenswrapper[4869]: E0106 14:17:12.738391 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f47b080-a76c-4e14-bc35-6144be23522c" containerName="mariadb-database-create" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.738397 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f47b080-a76c-4e14-bc35-6144be23522c" 
containerName="mariadb-database-create" Jan 06 14:17:12 crc kubenswrapper[4869]: E0106 14:17:12.738409 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e8d7b8f-1d96-4295-8fd6-954a19ecbbed" containerName="mariadb-database-create" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.738415 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e8d7b8f-1d96-4295-8fd6-954a19ecbbed" containerName="mariadb-database-create" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.738543 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7b17ac2-fe53-4c33-9cf0-3142a52dc576" containerName="mariadb-account-create-update" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.738559 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e8d7b8f-1d96-4295-8fd6-954a19ecbbed" containerName="mariadb-database-create" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.738567 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fe5915b-db4a-4fe4-9b6f-7bb930727ccf" containerName="mariadb-database-create" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.738577 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7d8c02b-1c43-45e6-b42c-28b229c349be" containerName="mariadb-account-create-update" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.738588 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="e84938b5-25e0-41d4-b97f-930d703f54e9" containerName="mariadb-account-create-update" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.738596 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f47b080-a76c-4e14-bc35-6144be23522c" containerName="mariadb-database-create" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.738604 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3db5340-07de-4174-902b-747c29a28f97" containerName="ovn-config" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.739178 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-v7q9b" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.746639 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-v7q9b"] Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.747550 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vl69r" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.747969 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.784781 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f215a91-c46f-447f-b277-362b4d419ed5-config-data\") pod \"glance-db-sync-v7q9b\" (UID: \"8f215a91-c46f-447f-b277-362b4d419ed5\") " pod="openstack/glance-db-sync-v7q9b" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.784847 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbj7j\" (UniqueName: \"kubernetes.io/projected/8f215a91-c46f-447f-b277-362b4d419ed5-kube-api-access-nbj7j\") pod \"glance-db-sync-v7q9b\" (UID: \"8f215a91-c46f-447f-b277-362b4d419ed5\") " pod="openstack/glance-db-sync-v7q9b" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.784976 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f215a91-c46f-447f-b277-362b4d419ed5-combined-ca-bundle\") pod \"glance-db-sync-v7q9b\" (UID: \"8f215a91-c46f-447f-b277-362b4d419ed5\") " pod="openstack/glance-db-sync-v7q9b" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.785021 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f215a91-c46f-447f-b277-362b4d419ed5-db-sync-config-data\") pod \"glance-db-sync-v7q9b\" (UID: \"8f215a91-c46f-447f-b277-362b4d419ed5\") " pod="openstack/glance-db-sync-v7q9b" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.886293 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f215a91-c46f-447f-b277-362b4d419ed5-db-sync-config-data\") pod \"glance-db-sync-v7q9b\" (UID: \"8f215a91-c46f-447f-b277-362b4d419ed5\") " pod="openstack/glance-db-sync-v7q9b" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.886385 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f215a91-c46f-447f-b277-362b4d419ed5-config-data\") pod \"glance-db-sync-v7q9b\" (UID: \"8f215a91-c46f-447f-b277-362b4d419ed5\") " pod="openstack/glance-db-sync-v7q9b" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.886413 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbj7j\" (UniqueName: \"kubernetes.io/projected/8f215a91-c46f-447f-b277-362b4d419ed5-kube-api-access-nbj7j\") pod \"glance-db-sync-v7q9b\" (UID: \"8f215a91-c46f-447f-b277-362b4d419ed5\") " pod="openstack/glance-db-sync-v7q9b" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.886482 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f215a91-c46f-447f-b277-362b4d419ed5-combined-ca-bundle\") pod 
\"glance-db-sync-v7q9b\" (UID: \"8f215a91-c46f-447f-b277-362b4d419ed5\") " pod="openstack/glance-db-sync-v7q9b" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.890911 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f215a91-c46f-447f-b277-362b4d419ed5-db-sync-config-data\") pod \"glance-db-sync-v7q9b\" (UID: \"8f215a91-c46f-447f-b277-362b4d419ed5\") " pod="openstack/glance-db-sync-v7q9b" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.891922 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f215a91-c46f-447f-b277-362b4d419ed5-combined-ca-bundle\") pod \"glance-db-sync-v7q9b\" (UID: \"8f215a91-c46f-447f-b277-362b4d419ed5\") " pod="openstack/glance-db-sync-v7q9b" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.894338 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f215a91-c46f-447f-b277-362b4d419ed5-config-data\") pod \"glance-db-sync-v7q9b\" (UID: \"8f215a91-c46f-447f-b277-362b4d419ed5\") " pod="openstack/glance-db-sync-v7q9b" Jan 06 14:17:12 crc kubenswrapper[4869]: I0106 14:17:12.904997 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbj7j\" (UniqueName: \"kubernetes.io/projected/8f215a91-c46f-447f-b277-362b4d419ed5-kube-api-access-nbj7j\") pod \"glance-db-sync-v7q9b\" (UID: \"8f215a91-c46f-447f-b277-362b4d419ed5\") " pod="openstack/glance-db-sync-v7q9b" Jan 06 14:17:13 crc kubenswrapper[4869]: I0106 14:17:13.045996 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-mmg7w" Jan 06 14:17:13 crc kubenswrapper[4869]: I0106 14:17:13.086895 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-v7q9b" Jan 06 14:17:13 crc kubenswrapper[4869]: I0106 14:17:13.582157 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-v7q9b"] Jan 06 14:17:13 crc kubenswrapper[4869]: I0106 14:17:13.717323 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3db5340-07de-4174-902b-747c29a28f97" path="/var/lib/kubelet/pods/e3db5340-07de-4174-902b-747c29a28f97/volumes" Jan 06 14:17:14 crc kubenswrapper[4869]: I0106 14:17:14.402637 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v7q9b" event={"ID":"8f215a91-c46f-447f-b277-362b4d419ed5","Type":"ContainerStarted","Data":"98b0f0801ff70b3b94c39018fdfec48206f293e27c0e5121a14a9a4a77632d64"} Jan 06 14:17:15 crc kubenswrapper[4869]: I0106 14:17:15.566096 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-g8z25"] Jan 06 14:17:15 crc kubenswrapper[4869]: I0106 14:17:15.567158 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-g8z25" Jan 06 14:17:15 crc kubenswrapper[4869]: I0106 14:17:15.569494 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 06 14:17:15 crc kubenswrapper[4869]: I0106 14:17:15.574092 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-g8z25"] Jan 06 14:17:15 crc kubenswrapper[4869]: I0106 14:17:15.635684 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrrxd\" (UniqueName: \"kubernetes.io/projected/136a4263-02a7-48bb-aace-502786258d44-kube-api-access-zrrxd\") pod \"root-account-create-update-g8z25\" (UID: \"136a4263-02a7-48bb-aace-502786258d44\") " pod="openstack/root-account-create-update-g8z25" Jan 06 14:17:15 crc kubenswrapper[4869]: I0106 14:17:15.635759 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/136a4263-02a7-48bb-aace-502786258d44-operator-scripts\") pod \"root-account-create-update-g8z25\" (UID: \"136a4263-02a7-48bb-aace-502786258d44\") " pod="openstack/root-account-create-update-g8z25" Jan 06 14:17:15 crc kubenswrapper[4869]: I0106 14:17:15.737355 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrrxd\" (UniqueName: \"kubernetes.io/projected/136a4263-02a7-48bb-aace-502786258d44-kube-api-access-zrrxd\") pod \"root-account-create-update-g8z25\" (UID: \"136a4263-02a7-48bb-aace-502786258d44\") " pod="openstack/root-account-create-update-g8z25" Jan 06 14:17:15 crc kubenswrapper[4869]: I0106 14:17:15.737420 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/136a4263-02a7-48bb-aace-502786258d44-operator-scripts\") pod \"root-account-create-update-g8z25\" (UID: \"136a4263-02a7-48bb-aace-502786258d44\") " pod="openstack/root-account-create-update-g8z25" Jan 06 14:17:15 crc kubenswrapper[4869]: I0106 14:17:15.738420 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/136a4263-02a7-48bb-aace-502786258d44-operator-scripts\") pod \"root-account-create-update-g8z25\" (UID: \"136a4263-02a7-48bb-aace-502786258d44\") " pod="openstack/root-account-create-update-g8z25" Jan 06 14:17:15 crc kubenswrapper[4869]: I0106 14:17:15.786430 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrrxd\" (UniqueName: \"kubernetes.io/projected/136a4263-02a7-48bb-aace-502786258d44-kube-api-access-zrrxd\") pod \"root-account-create-update-g8z25\" (UID: \"136a4263-02a7-48bb-aace-502786258d44\") " pod="openstack/root-account-create-update-g8z25" Jan 06 14:17:15 crc kubenswrapper[4869]: I0106 14:17:15.904062 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-g8z25" Jan 06 14:17:16 crc kubenswrapper[4869]: I0106 14:17:16.128850 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" Jan 06 14:17:16 crc kubenswrapper[4869]: I0106 14:17:16.238845 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cgzgv"] Jan 06 14:17:16 crc kubenswrapper[4869]: I0106 14:17:16.239073 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" podUID="7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94" containerName="dnsmasq-dns" containerID="cri-o://3005a014a00bf7cf97ef7a85ca51bf86ed12fdbb9d3f6ace639ab6297d41c8a1" gracePeriod=10 Jan 06 14:17:16 crc kubenswrapper[4869]: I0106 14:17:16.433840 4869 generic.go:334] "Generic (PLEG): container finished" podID="7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94" containerID="3005a014a00bf7cf97ef7a85ca51bf86ed12fdbb9d3f6ace639ab6297d41c8a1" exitCode=0 Jan 06 14:17:16 crc kubenswrapper[4869]: I0106 14:17:16.434026 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" event={"ID":"7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94","Type":"ContainerDied","Data":"3005a014a00bf7cf97ef7a85ca51bf86ed12fdbb9d3f6ace639ab6297d41c8a1"} Jan 06 14:17:16 crc kubenswrapper[4869]: I0106 14:17:16.537578 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-g8z25"] Jan 06 14:17:16 crc kubenswrapper[4869]: I0106 14:17:16.701986 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" Jan 06 14:17:16 crc kubenswrapper[4869]: I0106 14:17:16.774076 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94-dns-svc\") pod \"7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94\" (UID: \"7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94\") " Jan 06 14:17:16 crc kubenswrapper[4869]: I0106 14:17:16.774635 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94-config\") pod \"7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94\" (UID: \"7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94\") " Jan 06 14:17:16 crc kubenswrapper[4869]: I0106 14:17:16.774809 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbgsv\" (UniqueName: \"kubernetes.io/projected/7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94-kube-api-access-hbgsv\") pod \"7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94\" (UID: \"7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94\") " Jan 06 14:17:16 crc kubenswrapper[4869]: I0106 14:17:16.796937 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94-kube-api-access-hbgsv" (OuterVolumeSpecName: "kube-api-access-hbgsv") pod "7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94" (UID: "7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94"). InnerVolumeSpecName "kube-api-access-hbgsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:17:16 crc kubenswrapper[4869]: I0106 14:17:16.850546 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94" (UID: "7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:16 crc kubenswrapper[4869]: I0106 14:17:16.852223 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94-config" (OuterVolumeSpecName: "config") pod "7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94" (UID: "7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:16 crc kubenswrapper[4869]: I0106 14:17:16.876911 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:16 crc kubenswrapper[4869]: I0106 14:17:16.876954 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:16 crc kubenswrapper[4869]: I0106 14:17:16.876967 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbgsv\" (UniqueName: \"kubernetes.io/projected/7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94-kube-api-access-hbgsv\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:17 crc kubenswrapper[4869]: I0106 14:17:17.445153 4869 generic.go:334] "Generic (PLEG): container finished" podID="136a4263-02a7-48bb-aace-502786258d44" containerID="310fbcb3336336496ef32b3268ae9dbf1d5744d85dcdf4d47be24a4461d51341" exitCode=0 Jan 06 14:17:17 crc kubenswrapper[4869]: I0106 14:17:17.445207 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g8z25" event={"ID":"136a4263-02a7-48bb-aace-502786258d44","Type":"ContainerDied","Data":"310fbcb3336336496ef32b3268ae9dbf1d5744d85dcdf4d47be24a4461d51341"} Jan 06 14:17:17 crc kubenswrapper[4869]: I0106 14:17:17.445284 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g8z25" event={"ID":"136a4263-02a7-48bb-aace-502786258d44","Type":"ContainerStarted","Data":"84e2f96ca124353c3aaa197500c5533fd2d81b7a1d6d5e9eb69f6623dbf500c9"} Jan 06 14:17:17 crc kubenswrapper[4869]: I0106 14:17:17.448037 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" event={"ID":"7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94","Type":"ContainerDied","Data":"8d54e79d87284a979ff9218ea3593ec2b05fa4ad9fee9e3bedf393ef0dd395ee"} Jan 06 14:17:17 crc kubenswrapper[4869]: I0106 14:17:17.448085 4869 scope.go:117] "RemoveContainer" containerID="3005a014a00bf7cf97ef7a85ca51bf86ed12fdbb9d3f6ace639ab6297d41c8a1" Jan 06 14:17:17 crc kubenswrapper[4869]: I0106 14:17:17.448095 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-cgzgv" Jan 06 14:17:17 crc kubenswrapper[4869]: I0106 14:17:17.471405 4869 scope.go:117] "RemoveContainer" containerID="ebaa8c1959da9cc3c5146e161cde0de32ac60422d36616024857ecfb58a0ff1a" Jan 06 14:17:17 crc kubenswrapper[4869]: I0106 14:17:17.494987 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cgzgv"] Jan 06 14:17:17 crc kubenswrapper[4869]: I0106 14:17:17.502002 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-cgzgv"] Jan 06 14:17:17 crc kubenswrapper[4869]: I0106 14:17:17.724741 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94" path="/var/lib/kubelet/pods/7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94/volumes" Jan 06 14:17:18 crc kubenswrapper[4869]: I0106 14:17:18.785473 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-g8z25" Jan 06 14:17:18 crc kubenswrapper[4869]: I0106 14:17:18.812658 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/136a4263-02a7-48bb-aace-502786258d44-operator-scripts\") pod \"136a4263-02a7-48bb-aace-502786258d44\" (UID: \"136a4263-02a7-48bb-aace-502786258d44\") " Jan 06 14:17:18 crc kubenswrapper[4869]: I0106 14:17:18.812776 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrrxd\" (UniqueName: \"kubernetes.io/projected/136a4263-02a7-48bb-aace-502786258d44-kube-api-access-zrrxd\") pod \"136a4263-02a7-48bb-aace-502786258d44\" (UID: \"136a4263-02a7-48bb-aace-502786258d44\") " Jan 06 14:17:18 crc kubenswrapper[4869]: I0106 14:17:18.813780 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/136a4263-02a7-48bb-aace-502786258d44-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "136a4263-02a7-48bb-aace-502786258d44" (UID: "136a4263-02a7-48bb-aace-502786258d44"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:18 crc kubenswrapper[4869]: I0106 14:17:18.828076 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/136a4263-02a7-48bb-aace-502786258d44-kube-api-access-zrrxd" (OuterVolumeSpecName: "kube-api-access-zrrxd") pod "136a4263-02a7-48bb-aace-502786258d44" (UID: "136a4263-02a7-48bb-aace-502786258d44"). InnerVolumeSpecName "kube-api-access-zrrxd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:17:18 crc kubenswrapper[4869]: I0106 14:17:18.914110 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/136a4263-02a7-48bb-aace-502786258d44-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:18 crc kubenswrapper[4869]: I0106 14:17:18.914156 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrrxd\" (UniqueName: \"kubernetes.io/projected/136a4263-02a7-48bb-aace-502786258d44-kube-api-access-zrrxd\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:19 crc kubenswrapper[4869]: I0106 14:17:19.468557 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-g8z25" event={"ID":"136a4263-02a7-48bb-aace-502786258d44","Type":"ContainerDied","Data":"84e2f96ca124353c3aaa197500c5533fd2d81b7a1d6d5e9eb69f6623dbf500c9"} Jan 06 14:17:19 crc kubenswrapper[4869]: I0106 14:17:19.468608 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84e2f96ca124353c3aaa197500c5533fd2d81b7a1d6d5e9eb69f6623dbf500c9" Jan 06 14:17:19 crc kubenswrapper[4869]: I0106 14:17:19.468698 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-g8z25" Jan 06 14:17:21 crc kubenswrapper[4869]: I0106 14:17:21.009176 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 06 14:17:23 crc kubenswrapper[4869]: I0106 14:17:23.499267 4869 generic.go:334] "Generic (PLEG): container finished" podID="ae2b9cdc-8940-4aeb-bea8-fac416d93eed" containerID="426b388f0985e54a80a1a58cff04cdc2d24f72f605fe5010a05fc3ccdc5cb647" exitCode=0 Jan 06 14:17:23 crc kubenswrapper[4869]: I0106 14:17:23.499361 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ae2b9cdc-8940-4aeb-bea8-fac416d93eed","Type":"ContainerDied","Data":"426b388f0985e54a80a1a58cff04cdc2d24f72f605fe5010a05fc3ccdc5cb647"} Jan 06 14:17:23 crc kubenswrapper[4869]: I0106 14:17:23.502878 4869 generic.go:334] "Generic (PLEG): container finished" podID="a54155a0-94ff-4519-81e3-68a0bb1b62b6" containerID="b4f041e0f9531bfc6e45c8260345558c6a4ae855d64eb9a899f864051195a0a2" exitCode=0 Jan 06 14:17:23 crc kubenswrapper[4869]: I0106 14:17:23.502927 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a54155a0-94ff-4519-81e3-68a0bb1b62b6","Type":"ContainerDied","Data":"b4f041e0f9531bfc6e45c8260345558c6a4ae855d64eb9a899f864051195a0a2"} Jan 06 14:17:27 crc kubenswrapper[4869]: I0106 14:17:27.558912 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v7q9b" event={"ID":"8f215a91-c46f-447f-b277-362b4d419ed5","Type":"ContainerStarted","Data":"403154abd9d50451f7632fb702543fb1a38aee0d3e6c89082ce9926fd905d6b5"} Jan 06 14:17:27 crc kubenswrapper[4869]: I0106 14:17:27.565584 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a54155a0-94ff-4519-81e3-68a0bb1b62b6","Type":"ContainerStarted","Data":"fff04b7bdaac883fa42e70a3f4f3479f087735d8328b74bd9aec6635e624421d"} Jan 06 14:17:27 crc kubenswrapper[4869]: I0106 14:17:27.566423 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 06 14:17:27 crc kubenswrapper[4869]: I0106 14:17:27.570349 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ae2b9cdc-8940-4aeb-bea8-fac416d93eed","Type":"ContainerStarted","Data":"4e32ceea8482595843d531b398fd7bb8b0756b75344ed0c66cd64e8ce0ac81d4"} Jan 06 14:17:27 crc kubenswrapper[4869]: I0106 14:17:27.570862 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:17:27 crc kubenswrapper[4869]: I0106 14:17:27.587299 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-v7q9b" podStartSLOduration=2.2793058090000002 podStartE2EDuration="15.587279047s" podCreationTimestamp="2026-01-06 14:17:12 +0000 UTC" firstStartedPulling="2026-01-06 14:17:13.588771615 +0000 UTC m=+1052.128459279" lastFinishedPulling="2026-01-06 14:17:26.896744853 +0000 UTC m=+1065.436432517" observedRunningTime="2026-01-06 14:17:27.581009179 +0000 UTC m=+1066.120696843" watchObservedRunningTime="2026-01-06 14:17:27.587279047 +0000 UTC m=+1066.126966711" Jan 06 14:17:27 crc kubenswrapper[4869]: I0106 14:17:27.619634 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=40.656415697 podStartE2EDuration="1m35.619612051s" podCreationTimestamp="2026-01-06 14:15:52 +0000 UTC" firstStartedPulling="2026-01-06 14:15:54.705873784 +0000 UTC m=+973.245561448" lastFinishedPulling="2026-01-06 14:16:49.669070138 +0000 UTC m=+1028.208757802" observedRunningTime="2026-01-06 14:17:27.614602655 +0000 UTC m=+1066.154290329" watchObservedRunningTime="2026-01-06 14:17:27.619612051 +0000 UTC m=+1066.159299715" Jan 06 14:17:27 crc kubenswrapper[4869]: I0106 14:17:27.642454 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=40.332732015 podStartE2EDuration="1m35.642434706s" podCreationTimestamp="2026-01-06 14:15:52 +0000 UTC" firstStartedPulling="2026-01-06 14:15:54.35950825 +0000 UTC m=+972.899195914" lastFinishedPulling="2026-01-06 14:16:49.669210931 +0000 UTC m=+1028.208898605" observedRunningTime="2026-01-06 14:17:27.637273286 +0000 UTC m=+1066.176960980" watchObservedRunningTime="2026-01-06 14:17:27.642434706 +0000 UTC m=+1066.182122360" Jan 06 14:17:33 crc kubenswrapper[4869]: I0106 14:17:33.621268 4869 generic.go:334] "Generic (PLEG): container finished" podID="8f215a91-c46f-447f-b277-362b4d419ed5" containerID="403154abd9d50451f7632fb702543fb1a38aee0d3e6c89082ce9926fd905d6b5" exitCode=0 Jan 06 14:17:33 crc kubenswrapper[4869]: I0106 14:17:33.621358 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v7q9b" event={"ID":"8f215a91-c46f-447f-b277-362b4d419ed5","Type":"ContainerDied","Data":"403154abd9d50451f7632fb702543fb1a38aee0d3e6c89082ce9926fd905d6b5"} Jan 06 14:17:33 crc kubenswrapper[4869]: I0106 14:17:33.622380 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:17:33 crc kubenswrapper[4869]: I0106 14:17:33.622448 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:17:35 
crc kubenswrapper[4869]: I0106 14:17:35.098904 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-v7q9b" Jan 06 14:17:35 crc kubenswrapper[4869]: I0106 14:17:35.214522 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f215a91-c46f-447f-b277-362b4d419ed5-db-sync-config-data\") pod \"8f215a91-c46f-447f-b277-362b4d419ed5\" (UID: \"8f215a91-c46f-447f-b277-362b4d419ed5\") " Jan 06 14:17:35 crc kubenswrapper[4869]: I0106 14:17:35.214737 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f215a91-c46f-447f-b277-362b4d419ed5-combined-ca-bundle\") pod \"8f215a91-c46f-447f-b277-362b4d419ed5\" (UID: \"8f215a91-c46f-447f-b277-362b4d419ed5\") " Jan 06 14:17:35 crc kubenswrapper[4869]: I0106 14:17:35.214804 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbj7j\" (UniqueName: \"kubernetes.io/projected/8f215a91-c46f-447f-b277-362b4d419ed5-kube-api-access-nbj7j\") pod \"8f215a91-c46f-447f-b277-362b4d419ed5\" (UID: \"8f215a91-c46f-447f-b277-362b4d419ed5\") " Jan 06 14:17:35 crc kubenswrapper[4869]: I0106 14:17:35.214860 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f215a91-c46f-447f-b277-362b4d419ed5-config-data\") pod \"8f215a91-c46f-447f-b277-362b4d419ed5\" (UID: \"8f215a91-c46f-447f-b277-362b4d419ed5\") " Jan 06 14:17:35 crc kubenswrapper[4869]: I0106 14:17:35.220962 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f215a91-c46f-447f-b277-362b4d419ed5-kube-api-access-nbj7j" (OuterVolumeSpecName: "kube-api-access-nbj7j") pod "8f215a91-c46f-447f-b277-362b4d419ed5" (UID: "8f215a91-c46f-447f-b277-362b4d419ed5"). InnerVolumeSpecName "kube-api-access-nbj7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:17:35 crc kubenswrapper[4869]: I0106 14:17:35.222079 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f215a91-c46f-447f-b277-362b4d419ed5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8f215a91-c46f-447f-b277-362b4d419ed5" (UID: "8f215a91-c46f-447f-b277-362b4d419ed5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:17:35 crc kubenswrapper[4869]: I0106 14:17:35.271928 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f215a91-c46f-447f-b277-362b4d419ed5-config-data" (OuterVolumeSpecName: "config-data") pod "8f215a91-c46f-447f-b277-362b4d419ed5" (UID: "8f215a91-c46f-447f-b277-362b4d419ed5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:17:35 crc kubenswrapper[4869]: I0106 14:17:35.295311 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f215a91-c46f-447f-b277-362b4d419ed5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f215a91-c46f-447f-b277-362b4d419ed5" (UID: "8f215a91-c46f-447f-b277-362b4d419ed5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:17:35 crc kubenswrapper[4869]: I0106 14:17:35.316691 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f215a91-c46f-447f-b277-362b4d419ed5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:35 crc kubenswrapper[4869]: I0106 14:17:35.316755 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbj7j\" (UniqueName: \"kubernetes.io/projected/8f215a91-c46f-447f-b277-362b4d419ed5-kube-api-access-nbj7j\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:35 crc kubenswrapper[4869]: I0106 14:17:35.316774 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f215a91-c46f-447f-b277-362b4d419ed5-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:35 crc kubenswrapper[4869]: I0106 14:17:35.316791 4869 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8f215a91-c46f-447f-b277-362b4d419ed5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:35 crc kubenswrapper[4869]: I0106 14:17:35.640674 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-v7q9b" event={"ID":"8f215a91-c46f-447f-b277-362b4d419ed5","Type":"ContainerDied","Data":"98b0f0801ff70b3b94c39018fdfec48206f293e27c0e5121a14a9a4a77632d64"} Jan 06 14:17:35 crc kubenswrapper[4869]: I0106 14:17:35.640716 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98b0f0801ff70b3b94c39018fdfec48206f293e27c0e5121a14a9a4a77632d64" Jan 06 14:17:35 crc kubenswrapper[4869]: I0106 14:17:35.640740 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-v7q9b" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.125042 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-rmgvq"] Jan 06 14:17:36 crc kubenswrapper[4869]: E0106 14:17:36.125591 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94" containerName="dnsmasq-dns" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.125604 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94" containerName="dnsmasq-dns" Jan 06 14:17:36 crc kubenswrapper[4869]: E0106 14:17:36.125628 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="136a4263-02a7-48bb-aace-502786258d44" containerName="mariadb-account-create-update" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.125634 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="136a4263-02a7-48bb-aace-502786258d44" containerName="mariadb-account-create-update" Jan 06 14:17:36 crc kubenswrapper[4869]: E0106 14:17:36.125646 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94" containerName="init" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.125652 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94" containerName="init" Jan 06 14:17:36 crc kubenswrapper[4869]: E0106 14:17:36.125751 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f215a91-c46f-447f-b277-362b4d419ed5" containerName="glance-db-sync" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.125760 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f215a91-c46f-447f-b277-362b4d419ed5" containerName="glance-db-sync" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.125887 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="7099ee43-fb43-4a4c-b8d9-4a9c0ee2fc94" containerName="dnsmasq-dns" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.125900 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="136a4263-02a7-48bb-aace-502786258d44" containerName="mariadb-account-create-update" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.125908 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f215a91-c46f-447f-b277-362b4d419ed5" containerName="glance-db-sync" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.126645 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.157790 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-rmgvq"] Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.235466 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-rmgvq\" (UID: \"50dff5bf-77a4-43b5-aad5-b621313b4dca\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.235556 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t45q2\" (UniqueName: \"kubernetes.io/projected/50dff5bf-77a4-43b5-aad5-b621313b4dca-kube-api-access-t45q2\") pod \"dnsmasq-dns-54f9b7b8d9-rmgvq\" (UID: \"50dff5bf-77a4-43b5-aad5-b621313b4dca\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.235582 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-rmgvq\" (UID: \"50dff5bf-77a4-43b5-aad5-b621313b4dca\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.235615 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-config\") pod \"dnsmasq-dns-54f9b7b8d9-rmgvq\" (UID: \"50dff5bf-77a4-43b5-aad5-b621313b4dca\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.235691 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-rmgvq\" (UID: \"50dff5bf-77a4-43b5-aad5-b621313b4dca\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.337989 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-rmgvq\" (UID: \"50dff5bf-77a4-43b5-aad5-b621313b4dca\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.338110 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-rmgvq\" (UID: \"50dff5bf-77a4-43b5-aad5-b621313b4dca\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.338161 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t45q2\" (UniqueName: \"kubernetes.io/projected/50dff5bf-77a4-43b5-aad5-b621313b4dca-kube-api-access-t45q2\") pod \"dnsmasq-dns-54f9b7b8d9-rmgvq\" (UID: \"50dff5bf-77a4-43b5-aad5-b621313b4dca\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.338198 4869 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-rmgvq\" (UID: \"50dff5bf-77a4-43b5-aad5-b621313b4dca\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.338251 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-config\") pod \"dnsmasq-dns-54f9b7b8d9-rmgvq\" (UID: \"50dff5bf-77a4-43b5-aad5-b621313b4dca\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.339471 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-rmgvq\" (UID: \"50dff5bf-77a4-43b5-aad5-b621313b4dca\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.339504 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-config\") pod \"dnsmasq-dns-54f9b7b8d9-rmgvq\" (UID: \"50dff5bf-77a4-43b5-aad5-b621313b4dca\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.340149 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-rmgvq\" (UID: \"50dff5bf-77a4-43b5-aad5-b621313b4dca\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.342105 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-rmgvq\" (UID: \"50dff5bf-77a4-43b5-aad5-b621313b4dca\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.366787 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t45q2\" (UniqueName: \"kubernetes.io/projected/50dff5bf-77a4-43b5-aad5-b621313b4dca-kube-api-access-t45q2\") pod \"dnsmasq-dns-54f9b7b8d9-rmgvq\" (UID: \"50dff5bf-77a4-43b5-aad5-b621313b4dca\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.447985 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" Jan 06 14:17:36 crc kubenswrapper[4869]: I0106 14:17:36.942086 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-rmgvq"] Jan 06 14:17:37 crc kubenswrapper[4869]: I0106 14:17:37.660259 4869 generic.go:334] "Generic (PLEG): container finished" podID="50dff5bf-77a4-43b5-aad5-b621313b4dca" containerID="dc717207afca3f511272fe5e4c16c0812468e7c48e1e00014c59aa3f02663b25" exitCode=0 Jan 06 14:17:37 crc kubenswrapper[4869]: I0106 14:17:37.660347 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" event={"ID":"50dff5bf-77a4-43b5-aad5-b621313b4dca","Type":"ContainerDied","Data":"dc717207afca3f511272fe5e4c16c0812468e7c48e1e00014c59aa3f02663b25"} Jan 06 14:17:37 crc kubenswrapper[4869]: I0106 14:17:37.660509 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" event={"ID":"50dff5bf-77a4-43b5-aad5-b621313b4dca","Type":"ContainerStarted","Data":"4bf764c7f05b957b97048158578bd4243578e3879557858364d86d069f6e84a5"} Jan 06 14:17:38 crc kubenswrapper[4869]: I0106 14:17:38.669457 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" event={"ID":"50dff5bf-77a4-43b5-aad5-b621313b4dca","Type":"ContainerStarted","Data":"7e3736c2d1c7509f63bc9396be06667964a0128d89a2da201618646e1e344e51"} Jan 06 14:17:38 crc kubenswrapper[4869]: I0106 14:17:38.670235 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" Jan 06 14:17:38 crc kubenswrapper[4869]: I0106 14:17:38.693973 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" podStartSLOduration=2.693954326 podStartE2EDuration="2.693954326s" podCreationTimestamp="2026-01-06 14:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:17:38.693006742 +0000 UTC m=+1077.232694426" watchObservedRunningTime="2026-01-06 14:17:38.693954326 +0000 UTC m=+1077.233641990" Jan 06 14:17:43 crc kubenswrapper[4869]: I0106 14:17:43.752848 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.051973 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-85w6r"] Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.053052 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-85w6r" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.069920 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-85w6r"] Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.146867 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.163473 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42ae2e09-f75f-4bb9-927d-6b0aba81872f-operator-scripts\") pod \"cinder-db-create-85w6r\" (UID: \"42ae2e09-f75f-4bb9-927d-6b0aba81872f\") " pod="openstack/cinder-db-create-85w6r" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.163542 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvjzv\" (UniqueName: \"kubernetes.io/projected/42ae2e09-f75f-4bb9-927d-6b0aba81872f-kube-api-access-qvjzv\") pod \"cinder-db-create-85w6r\" (UID: \"42ae2e09-f75f-4bb9-927d-6b0aba81872f\") " pod="openstack/cinder-db-create-85w6r" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.166175 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-2hhw5"] Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.167423 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2hhw5" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.175039 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-0d3a-account-create-update-ht79k"] Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.176229 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0d3a-account-create-update-ht79k" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.183818 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.193397 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2hhw5"] Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.227478 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0d3a-account-create-update-ht79k"] Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.264940 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c74a40f5-6fe0-406f-bf62-6d643e7f7f22-operator-scripts\") pod \"cinder-0d3a-account-create-update-ht79k\" (UID: \"c74a40f5-6fe0-406f-bf62-6d643e7f7f22\") " pod="openstack/cinder-0d3a-account-create-update-ht79k" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.265007 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42ae2e09-f75f-4bb9-927d-6b0aba81872f-operator-scripts\") pod \"cinder-db-create-85w6r\" (UID: \"42ae2e09-f75f-4bb9-927d-6b0aba81872f\") " pod="openstack/cinder-db-create-85w6r" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.265048 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvjzv\" (UniqueName: \"kubernetes.io/projected/42ae2e09-f75f-4bb9-927d-6b0aba81872f-kube-api-access-qvjzv\") pod \"cinder-db-create-85w6r\" (UID: \"42ae2e09-f75f-4bb9-927d-6b0aba81872f\") " pod="openstack/cinder-db-create-85w6r" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.265079 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjkhr\" (UniqueName: \"kubernetes.io/projected/fc9cd7d4-55b8-4008-9b37-040142576d79-kube-api-access-rjkhr\") pod \"barbican-db-create-2hhw5\" (UID: \"fc9cd7d4-55b8-4008-9b37-040142576d79\") " pod="openstack/barbican-db-create-2hhw5" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.265136 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmqhd\" (UniqueName: \"kubernetes.io/projected/c74a40f5-6fe0-406f-bf62-6d643e7f7f22-kube-api-access-kmqhd\") pod \"cinder-0d3a-account-create-update-ht79k\" (UID: \"c74a40f5-6fe0-406f-bf62-6d643e7f7f22\") " pod="openstack/cinder-0d3a-account-create-update-ht79k" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.265208 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc9cd7d4-55b8-4008-9b37-040142576d79-operator-scripts\") pod \"barbican-db-create-2hhw5\" (UID: \"fc9cd7d4-55b8-4008-9b37-040142576d79\") " pod="openstack/barbican-db-create-2hhw5" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.266331 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42ae2e09-f75f-4bb9-927d-6b0aba81872f-operator-scripts\") pod \"cinder-db-create-85w6r\" (UID: \"42ae2e09-f75f-4bb9-927d-6b0aba81872f\") " pod="openstack/cinder-db-create-85w6r" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.276943 4869 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-d360-account-create-update-mj76v"] Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.278087 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d360-account-create-update-mj76v" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.281337 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.306828 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d360-account-create-update-mj76v"] Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.315948 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvjzv\" (UniqueName: \"kubernetes.io/projected/42ae2e09-f75f-4bb9-927d-6b0aba81872f-kube-api-access-qvjzv\") pod \"cinder-db-create-85w6r\" (UID: \"42ae2e09-f75f-4bb9-927d-6b0aba81872f\") " pod="openstack/cinder-db-create-85w6r" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.363542 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-xzm5r"] Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.364601 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xzm5r" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.367074 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c74a40f5-6fe0-406f-bf62-6d643e7f7f22-operator-scripts\") pod \"cinder-0d3a-account-create-update-ht79k\" (UID: \"c74a40f5-6fe0-406f-bf62-6d643e7f7f22\") " pod="openstack/cinder-0d3a-account-create-update-ht79k" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.367165 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6589777-9306-4d6c-9c5a-ae0961448cb9-operator-scripts\") pod \"barbican-d360-account-create-update-mj76v\" (UID: \"d6589777-9306-4d6c-9c5a-ae0961448cb9\") " pod="openstack/barbican-d360-account-create-update-mj76v" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.367195 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpdcw\" (UniqueName: \"kubernetes.io/projected/6e6602c6-cd27-4d18-91d8-47d0eb285a52-kube-api-access-gpdcw\") pod \"neutron-db-create-xzm5r\" (UID: \"6e6602c6-cd27-4d18-91d8-47d0eb285a52\") " pod="openstack/neutron-db-create-xzm5r" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.367233 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjkhr\" (UniqueName: \"kubernetes.io/projected/fc9cd7d4-55b8-4008-9b37-040142576d79-kube-api-access-rjkhr\") pod \"barbican-db-create-2hhw5\" (UID: \"fc9cd7d4-55b8-4008-9b37-040142576d79\") " pod="openstack/barbican-db-create-2hhw5" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.367308 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmqhd\" (UniqueName: \"kubernetes.io/projected/c74a40f5-6fe0-406f-bf62-6d643e7f7f22-kube-api-access-kmqhd\") pod \"cinder-0d3a-account-create-update-ht79k\" (UID: \"c74a40f5-6fe0-406f-bf62-6d643e7f7f22\") " pod="openstack/cinder-0d3a-account-create-update-ht79k" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.367352 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e6602c6-cd27-4d18-91d8-47d0eb285a52-operator-scripts\") pod \"neutron-db-create-xzm5r\" (UID: \"6e6602c6-cd27-4d18-91d8-47d0eb285a52\") " pod="openstack/neutron-db-create-xzm5r" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.367396 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtzj6\" (UniqueName: \"kubernetes.io/projected/d6589777-9306-4d6c-9c5a-ae0961448cb9-kube-api-access-vtzj6\") pod \"barbican-d360-account-create-update-mj76v\" (UID: \"d6589777-9306-4d6c-9c5a-ae0961448cb9\") " pod="openstack/barbican-d360-account-create-update-mj76v" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.367420 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc9cd7d4-55b8-4008-9b37-040142576d79-operator-scripts\") pod \"barbican-db-create-2hhw5\" (UID: \"fc9cd7d4-55b8-4008-9b37-040142576d79\") " pod="openstack/barbican-db-create-2hhw5" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.368260 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c74a40f5-6fe0-406f-bf62-6d643e7f7f22-operator-scripts\") pod \"cinder-0d3a-account-create-update-ht79k\" (UID: \"c74a40f5-6fe0-406f-bf62-6d643e7f7f22\") " pod="openstack/cinder-0d3a-account-create-update-ht79k" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.368290 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc9cd7d4-55b8-4008-9b37-040142576d79-operator-scripts\") pod \"barbican-db-create-2hhw5\" (UID: \"fc9cd7d4-55b8-4008-9b37-040142576d79\") " pod="openstack/barbican-db-create-2hhw5" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.379228 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xzm5r"] Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.393611 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjkhr\" (UniqueName: \"kubernetes.io/projected/fc9cd7d4-55b8-4008-9b37-040142576d79-kube-api-access-rjkhr\") pod \"barbican-db-create-2hhw5\" (UID: \"fc9cd7d4-55b8-4008-9b37-040142576d79\") " pod="openstack/barbican-db-create-2hhw5" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.393771 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmqhd\" (UniqueName: \"kubernetes.io/projected/c74a40f5-6fe0-406f-bf62-6d643e7f7f22-kube-api-access-kmqhd\") pod \"cinder-0d3a-account-create-update-ht79k\" (UID: \"c74a40f5-6fe0-406f-bf62-6d643e7f7f22\") " pod="openstack/cinder-0d3a-account-create-update-ht79k" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.404545 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-85w6r" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.470878 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtzj6\" (UniqueName: \"kubernetes.io/projected/d6589777-9306-4d6c-9c5a-ae0961448cb9-kube-api-access-vtzj6\") pod \"barbican-d360-account-create-update-mj76v\" (UID: \"d6589777-9306-4d6c-9c5a-ae0961448cb9\") " pod="openstack/barbican-d360-account-create-update-mj76v" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.471210 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6589777-9306-4d6c-9c5a-ae0961448cb9-operator-scripts\") pod \"barbican-d360-account-create-update-mj76v\" (UID: \"d6589777-9306-4d6c-9c5a-ae0961448cb9\") " pod="openstack/barbican-d360-account-create-update-mj76v" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.471231 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpdcw\" (UniqueName: \"kubernetes.io/projected/6e6602c6-cd27-4d18-91d8-47d0eb285a52-kube-api-access-gpdcw\") pod \"neutron-db-create-xzm5r\" (UID: \"6e6602c6-cd27-4d18-91d8-47d0eb285a52\") " pod="openstack/neutron-db-create-xzm5r" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.471290 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e6602c6-cd27-4d18-91d8-47d0eb285a52-operator-scripts\") pod \"neutron-db-create-xzm5r\" (UID: \"6e6602c6-cd27-4d18-91d8-47d0eb285a52\") " pod="openstack/neutron-db-create-xzm5r" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.472040 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e6602c6-cd27-4d18-91d8-47d0eb285a52-operator-scripts\") pod \"neutron-db-create-xzm5r\" (UID: \"6e6602c6-cd27-4d18-91d8-47d0eb285a52\") " pod="openstack/neutron-db-create-xzm5r" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.472382 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6589777-9306-4d6c-9c5a-ae0961448cb9-operator-scripts\") pod \"barbican-d360-account-create-update-mj76v\" (UID: \"d6589777-9306-4d6c-9c5a-ae0961448cb9\") " pod="openstack/barbican-d360-account-create-update-mj76v" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.487289 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2hhw5" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.491745 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-k4szp"] Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.492723 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-k4szp" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.494429 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6zj5p" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.502972 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-0d3a-account-create-update-ht79k" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.503982 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.506568 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.507291 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.510553 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpdcw\" (UniqueName: \"kubernetes.io/projected/6e6602c6-cd27-4d18-91d8-47d0eb285a52-kube-api-access-gpdcw\") pod \"neutron-db-create-xzm5r\" (UID: \"6e6602c6-cd27-4d18-91d8-47d0eb285a52\") " pod="openstack/neutron-db-create-xzm5r" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.522232 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtzj6\" (UniqueName: \"kubernetes.io/projected/d6589777-9306-4d6c-9c5a-ae0961448cb9-kube-api-access-vtzj6\") pod \"barbican-d360-account-create-update-mj76v\" (UID: \"d6589777-9306-4d6c-9c5a-ae0961448cb9\") " pod="openstack/barbican-d360-account-create-update-mj76v" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.557752 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-454e-account-create-update-kvssd"] Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.558921 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-454e-account-create-update-kvssd" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.562433 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.574226 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-k4szp"] Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.584902 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-454e-account-create-update-kvssd"] Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.645144 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-d360-account-create-update-mj76v" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.674174 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b89d4\" (UniqueName: \"kubernetes.io/projected/2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23-kube-api-access-b89d4\") pod \"keystone-db-sync-k4szp\" (UID: \"2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23\") " pod="openstack/keystone-db-sync-k4szp" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.674208 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bc8c590-2b3d-47a0-ada1-029b0d12210d-operator-scripts\") pod \"neutron-454e-account-create-update-kvssd\" (UID: \"1bc8c590-2b3d-47a0-ada1-029b0d12210d\") " pod="openstack/neutron-454e-account-create-update-kvssd" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.674334 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwtnc\" (UniqueName: \"kubernetes.io/projected/1bc8c590-2b3d-47a0-ada1-029b0d12210d-kube-api-access-lwtnc\") pod \"neutron-454e-account-create-update-kvssd\" (UID: \"1bc8c590-2b3d-47a0-ada1-029b0d12210d\") " pod="openstack/neutron-454e-account-create-update-kvssd" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.674377 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23-config-data\") pod \"keystone-db-sync-k4szp\" (UID: \"2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23\") " pod="openstack/keystone-db-sync-k4szp" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.674467 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23-combined-ca-bundle\") pod \"keystone-db-sync-k4szp\" (UID: \"2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23\") " pod="openstack/keystone-db-sync-k4szp" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.681341 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-xzm5r" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.777297 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23-config-data\") pod \"keystone-db-sync-k4szp\" (UID: \"2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23\") " pod="openstack/keystone-db-sync-k4szp" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.777684 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23-combined-ca-bundle\") pod \"keystone-db-sync-k4szp\" (UID: \"2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23\") " pod="openstack/keystone-db-sync-k4szp" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.777713 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b89d4\" (UniqueName: \"kubernetes.io/projected/2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23-kube-api-access-b89d4\") pod \"keystone-db-sync-k4szp\" (UID: \"2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23\") " pod="openstack/keystone-db-sync-k4szp" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.777729 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bc8c590-2b3d-47a0-ada1-029b0d12210d-operator-scripts\") pod \"neutron-454e-account-create-update-kvssd\" (UID: \"1bc8c590-2b3d-47a0-ada1-029b0d12210d\") " pod="openstack/neutron-454e-account-create-update-kvssd" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.777787 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwtnc\" (UniqueName: \"kubernetes.io/projected/1bc8c590-2b3d-47a0-ada1-029b0d12210d-kube-api-access-lwtnc\") pod \"neutron-454e-account-create-update-kvssd\" (UID: \"1bc8c590-2b3d-47a0-ada1-029b0d12210d\") " pod="openstack/neutron-454e-account-create-update-kvssd" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.779073 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bc8c590-2b3d-47a0-ada1-029b0d12210d-operator-scripts\") pod \"neutron-454e-account-create-update-kvssd\" (UID: \"1bc8c590-2b3d-47a0-ada1-029b0d12210d\") " pod="openstack/neutron-454e-account-create-update-kvssd" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.785856 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23-combined-ca-bundle\") pod \"keystone-db-sync-k4szp\" (UID: \"2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23\") " pod="openstack/keystone-db-sync-k4szp" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.786356 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23-config-data\") pod \"keystone-db-sync-k4szp\" (UID: \"2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23\") " pod="openstack/keystone-db-sync-k4szp" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.799772 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwtnc\" (UniqueName: \"kubernetes.io/projected/1bc8c590-2b3d-47a0-ada1-029b0d12210d-kube-api-access-lwtnc\") pod \"neutron-454e-account-create-update-kvssd\" (UID: \"1bc8c590-2b3d-47a0-ada1-029b0d12210d\") " 
pod="openstack/neutron-454e-account-create-update-kvssd" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.808427 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b89d4\" (UniqueName: \"kubernetes.io/projected/2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23-kube-api-access-b89d4\") pod \"keystone-db-sync-k4szp\" (UID: \"2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23\") " pod="openstack/keystone-db-sync-k4szp" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.869519 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-k4szp" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.882177 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-454e-account-create-update-kvssd" Jan 06 14:17:44 crc kubenswrapper[4869]: I0106 14:17:44.987408 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2hhw5"] Jan 06 14:17:45 crc kubenswrapper[4869]: I0106 14:17:45.007809 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-85w6r"] Jan 06 14:17:45 crc kubenswrapper[4869]: I0106 14:17:45.020691 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-0d3a-account-create-update-ht79k"] Jan 06 14:17:45 crc kubenswrapper[4869]: W0106 14:17:45.096128 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc74a40f5_6fe0_406f_bf62_6d643e7f7f22.slice/crio-395b0b2212f96408c661e4aff6709a331e47f3e60a26fab00dbee01a89e241e3 WatchSource:0}: Error finding container 395b0b2212f96408c661e4aff6709a331e47f3e60a26fab00dbee01a89e241e3: Status 404 returned error can't find the container with id 395b0b2212f96408c661e4aff6709a331e47f3e60a26fab00dbee01a89e241e3 Jan 06 14:17:45 crc kubenswrapper[4869]: I0106 14:17:45.149747 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-d360-account-create-update-mj76v"] Jan 06 14:17:45 crc kubenswrapper[4869]: I0106 14:17:45.498309 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-454e-account-create-update-kvssd"] Jan 06 14:17:45 crc kubenswrapper[4869]: I0106 14:17:45.506630 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-k4szp"] Jan 06 14:17:45 crc kubenswrapper[4869]: I0106 14:17:45.615436 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xzm5r"] Jan 06 14:17:45 crc kubenswrapper[4869]: W0106 14:17:45.653776 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e6602c6_cd27_4d18_91d8_47d0eb285a52.slice/crio-c7873b5d240f55305729bab9ba9a1443b26ce8fd66538be15449cdc8428d36bd WatchSource:0}: Error finding container c7873b5d240f55305729bab9ba9a1443b26ce8fd66538be15449cdc8428d36bd: Status 404 returned error can't find the container with id c7873b5d240f55305729bab9ba9a1443b26ce8fd66538be15449cdc8428d36bd Jan 06 14:17:45 crc kubenswrapper[4869]: I0106 14:17:45.735779 4869 generic.go:334] "Generic (PLEG): container finished" podID="42ae2e09-f75f-4bb9-927d-6b0aba81872f" containerID="230d35f321a66e06386c61ba26dc9521f1cc90936ab181595a41774657300e0a" exitCode=0 Jan 06 14:17:45 crc kubenswrapper[4869]: I0106 14:17:45.735868 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-85w6r" 
event={"ID":"42ae2e09-f75f-4bb9-927d-6b0aba81872f","Type":"ContainerDied","Data":"230d35f321a66e06386c61ba26dc9521f1cc90936ab181595a41774657300e0a"} Jan 06 14:17:45 crc kubenswrapper[4869]: I0106 14:17:45.735924 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-85w6r" event={"ID":"42ae2e09-f75f-4bb9-927d-6b0aba81872f","Type":"ContainerStarted","Data":"d09491a8e602e3c5d880a4adb90d6cfea10f7a1ab9fccd1bdaf2175e1dcf4959"} Jan 06 14:17:45 crc kubenswrapper[4869]: I0106 14:17:45.737550 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xzm5r" event={"ID":"6e6602c6-cd27-4d18-91d8-47d0eb285a52","Type":"ContainerStarted","Data":"c7873b5d240f55305729bab9ba9a1443b26ce8fd66538be15449cdc8428d36bd"} Jan 06 14:17:45 crc kubenswrapper[4869]: I0106 14:17:45.746552 4869 generic.go:334] "Generic (PLEG): container finished" podID="d6589777-9306-4d6c-9c5a-ae0961448cb9" containerID="4bc873c18e0717ac68ad8de97820d45dc0ec7156feeac4e0aef3f75cd60949f7" exitCode=0 Jan 06 14:17:45 crc kubenswrapper[4869]: I0106 14:17:45.746624 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d360-account-create-update-mj76v" event={"ID":"d6589777-9306-4d6c-9c5a-ae0961448cb9","Type":"ContainerDied","Data":"4bc873c18e0717ac68ad8de97820d45dc0ec7156feeac4e0aef3f75cd60949f7"} Jan 06 14:17:45 crc kubenswrapper[4869]: I0106 14:17:45.746647 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d360-account-create-update-mj76v" event={"ID":"d6589777-9306-4d6c-9c5a-ae0961448cb9","Type":"ContainerStarted","Data":"7263a56b5fb6e0bab1a56ed598d9c6bed881c49f3a7f0159b7eca0136924e44f"} Jan 06 14:17:45 crc kubenswrapper[4869]: I0106 14:17:45.752608 4869 generic.go:334] "Generic (PLEG): container finished" podID="c74a40f5-6fe0-406f-bf62-6d643e7f7f22" containerID="229d41613f2748f977f52a4d2b9ef2229decbbb63605642b3b2df09f54ffed8d" exitCode=0 Jan 06 14:17:45 crc kubenswrapper[4869]: I0106 14:17:45.752818 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0d3a-account-create-update-ht79k" event={"ID":"c74a40f5-6fe0-406f-bf62-6d643e7f7f22","Type":"ContainerDied","Data":"229d41613f2748f977f52a4d2b9ef2229decbbb63605642b3b2df09f54ffed8d"} Jan 06 14:17:45 crc kubenswrapper[4869]: I0106 14:17:45.752843 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0d3a-account-create-update-ht79k" event={"ID":"c74a40f5-6fe0-406f-bf62-6d643e7f7f22","Type":"ContainerStarted","Data":"395b0b2212f96408c661e4aff6709a331e47f3e60a26fab00dbee01a89e241e3"} Jan 06 14:17:45 crc kubenswrapper[4869]: I0106 14:17:45.754949 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-k4szp" event={"ID":"2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23","Type":"ContainerStarted","Data":"739b952fb7f7618fdc9ff8328f701801cf565d917502908bb70d0f56d6918aa8"} Jan 06 14:17:45 crc kubenswrapper[4869]: I0106 14:17:45.756004 4869 generic.go:334] "Generic (PLEG): container finished" podID="fc9cd7d4-55b8-4008-9b37-040142576d79" containerID="592690da299cf60315705702f7cb4dde9dd7b6ba626ada6033397602ad2f141a" exitCode=0 Jan 06 14:17:45 crc kubenswrapper[4869]: I0106 14:17:45.756089 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2hhw5" event={"ID":"fc9cd7d4-55b8-4008-9b37-040142576d79","Type":"ContainerDied","Data":"592690da299cf60315705702f7cb4dde9dd7b6ba626ada6033397602ad2f141a"} Jan 06 14:17:45 crc kubenswrapper[4869]: I0106 14:17:45.756111 4869 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/barbican-db-create-2hhw5" event={"ID":"fc9cd7d4-55b8-4008-9b37-040142576d79","Type":"ContainerStarted","Data":"069661069a675be8c5bb81a3eb8b37740953df714ff4ffafd777ac09d044099a"} Jan 06 14:17:45 crc kubenswrapper[4869]: I0106 14:17:45.759567 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-454e-account-create-update-kvssd" event={"ID":"1bc8c590-2b3d-47a0-ada1-029b0d12210d","Type":"ContainerStarted","Data":"7a78d3f8007208c95b7a3ea6ed8790c58a4d69c7535950042c7edc89b13ffd95"} Jan 06 14:17:46 crc kubenswrapper[4869]: I0106 14:17:46.449870 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" Jan 06 14:17:46 crc kubenswrapper[4869]: I0106 14:17:46.510843 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-92ks9"] Jan 06 14:17:46 crc kubenswrapper[4869]: I0106 14:17:46.511197 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" podUID="1e7566f7-7393-481e-bac9-db0f5c880b46" containerName="dnsmasq-dns" containerID="cri-o://d64ebf53feecdc34a2fca950b056fbb93532bbd43e41cb4e164d56f31c298a0a" gracePeriod=10 Jan 06 14:17:46 crc kubenswrapper[4869]: I0106 14:17:46.775116 4869 generic.go:334] "Generic (PLEG): container finished" podID="1e7566f7-7393-481e-bac9-db0f5c880b46" containerID="d64ebf53feecdc34a2fca950b056fbb93532bbd43e41cb4e164d56f31c298a0a" exitCode=0 Jan 06 14:17:46 crc kubenswrapper[4869]: I0106 14:17:46.775204 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" event={"ID":"1e7566f7-7393-481e-bac9-db0f5c880b46","Type":"ContainerDied","Data":"d64ebf53feecdc34a2fca950b056fbb93532bbd43e41cb4e164d56f31c298a0a"} Jan 06 14:17:46 crc kubenswrapper[4869]: I0106 14:17:46.779001 4869 generic.go:334] "Generic (PLEG): container finished" podID="1bc8c590-2b3d-47a0-ada1-029b0d12210d" containerID="f92e742ecc0cd2df0c935dfec5aaa06b33fac16255dd5c592538254db7f66cc9" exitCode=0 Jan 06 14:17:46 crc kubenswrapper[4869]: I0106 14:17:46.779085 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-454e-account-create-update-kvssd" event={"ID":"1bc8c590-2b3d-47a0-ada1-029b0d12210d","Type":"ContainerDied","Data":"f92e742ecc0cd2df0c935dfec5aaa06b33fac16255dd5c592538254db7f66cc9"} Jan 06 14:17:46 crc kubenswrapper[4869]: I0106 14:17:46.805218 4869 generic.go:334] "Generic (PLEG): container finished" podID="6e6602c6-cd27-4d18-91d8-47d0eb285a52" containerID="85ea616470a2cded040cc3cc651c200b3daee0e6fe3688dbacef53871f1896c7" exitCode=0 Jan 06 14:17:46 crc kubenswrapper[4869]: I0106 14:17:46.805488 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xzm5r" event={"ID":"6e6602c6-cd27-4d18-91d8-47d0eb285a52","Type":"ContainerDied","Data":"85ea616470a2cded040cc3cc651c200b3daee0e6fe3688dbacef53871f1896c7"} Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.030161 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.150203 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-ovsdbserver-nb\") pod \"1e7566f7-7393-481e-bac9-db0f5c880b46\" (UID: \"1e7566f7-7393-481e-bac9-db0f5c880b46\") " Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.150247 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-dns-svc\") pod \"1e7566f7-7393-481e-bac9-db0f5c880b46\" (UID: \"1e7566f7-7393-481e-bac9-db0f5c880b46\") " Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.150270 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljcxx\" (UniqueName: \"kubernetes.io/projected/1e7566f7-7393-481e-bac9-db0f5c880b46-kube-api-access-ljcxx\") pod \"1e7566f7-7393-481e-bac9-db0f5c880b46\" (UID: \"1e7566f7-7393-481e-bac9-db0f5c880b46\") " Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.150303 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-ovsdbserver-sb\") pod \"1e7566f7-7393-481e-bac9-db0f5c880b46\" (UID: \"1e7566f7-7393-481e-bac9-db0f5c880b46\") " Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.150383 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-config\") pod \"1e7566f7-7393-481e-bac9-db0f5c880b46\" (UID: \"1e7566f7-7393-481e-bac9-db0f5c880b46\") " Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.166694 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e7566f7-7393-481e-bac9-db0f5c880b46-kube-api-access-ljcxx" (OuterVolumeSpecName: "kube-api-access-ljcxx") pod "1e7566f7-7393-481e-bac9-db0f5c880b46" (UID: "1e7566f7-7393-481e-bac9-db0f5c880b46"). InnerVolumeSpecName "kube-api-access-ljcxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.197845 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0d3a-account-create-update-ht79k" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.200690 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1e7566f7-7393-481e-bac9-db0f5c880b46" (UID: "1e7566f7-7393-481e-bac9-db0f5c880b46"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.220908 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1e7566f7-7393-481e-bac9-db0f5c880b46" (UID: "1e7566f7-7393-481e-bac9-db0f5c880b46"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.222094 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1e7566f7-7393-481e-bac9-db0f5c880b46" (UID: "1e7566f7-7393-481e-bac9-db0f5c880b46"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.229773 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-config" (OuterVolumeSpecName: "config") pod "1e7566f7-7393-481e-bac9-db0f5c880b46" (UID: "1e7566f7-7393-481e-bac9-db0f5c880b46"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.251918 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmqhd\" (UniqueName: \"kubernetes.io/projected/c74a40f5-6fe0-406f-bf62-6d643e7f7f22-kube-api-access-kmqhd\") pod \"c74a40f5-6fe0-406f-bf62-6d643e7f7f22\" (UID: \"c74a40f5-6fe0-406f-bf62-6d643e7f7f22\") " Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.252043 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c74a40f5-6fe0-406f-bf62-6d643e7f7f22-operator-scripts\") pod \"c74a40f5-6fe0-406f-bf62-6d643e7f7f22\" (UID: \"c74a40f5-6fe0-406f-bf62-6d643e7f7f22\") " Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.252483 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.252496 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.252506 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.252514 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljcxx\" (UniqueName: \"kubernetes.io/projected/1e7566f7-7393-481e-bac9-db0f5c880b46-kube-api-access-ljcxx\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.252522 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e7566f7-7393-481e-bac9-db0f5c880b46-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.253658 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c74a40f5-6fe0-406f-bf62-6d643e7f7f22-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c74a40f5-6fe0-406f-bf62-6d643e7f7f22" (UID: "c74a40f5-6fe0-406f-bf62-6d643e7f7f22"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.256462 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c74a40f5-6fe0-406f-bf62-6d643e7f7f22-kube-api-access-kmqhd" (OuterVolumeSpecName: "kube-api-access-kmqhd") pod "c74a40f5-6fe0-406f-bf62-6d643e7f7f22" (UID: "c74a40f5-6fe0-406f-bf62-6d643e7f7f22"). InnerVolumeSpecName "kube-api-access-kmqhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.353761 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmqhd\" (UniqueName: \"kubernetes.io/projected/c74a40f5-6fe0-406f-bf62-6d643e7f7f22-kube-api-access-kmqhd\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.353800 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c74a40f5-6fe0-406f-bf62-6d643e7f7f22-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.432819 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2hhw5" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.442376 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d360-account-create-update-mj76v" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.451361 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-85w6r" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.556270 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvjzv\" (UniqueName: \"kubernetes.io/projected/42ae2e09-f75f-4bb9-927d-6b0aba81872f-kube-api-access-qvjzv\") pod \"42ae2e09-f75f-4bb9-927d-6b0aba81872f\" (UID: \"42ae2e09-f75f-4bb9-927d-6b0aba81872f\") " Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.556320 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjkhr\" (UniqueName: \"kubernetes.io/projected/fc9cd7d4-55b8-4008-9b37-040142576d79-kube-api-access-rjkhr\") pod \"fc9cd7d4-55b8-4008-9b37-040142576d79\" (UID: \"fc9cd7d4-55b8-4008-9b37-040142576d79\") " Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.556367 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6589777-9306-4d6c-9c5a-ae0961448cb9-operator-scripts\") pod \"d6589777-9306-4d6c-9c5a-ae0961448cb9\" (UID: \"d6589777-9306-4d6c-9c5a-ae0961448cb9\") " Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.556402 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtzj6\" (UniqueName: \"kubernetes.io/projected/d6589777-9306-4d6c-9c5a-ae0961448cb9-kube-api-access-vtzj6\") pod \"d6589777-9306-4d6c-9c5a-ae0961448cb9\" (UID: \"d6589777-9306-4d6c-9c5a-ae0961448cb9\") " Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.556543 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc9cd7d4-55b8-4008-9b37-040142576d79-operator-scripts\") pod \"fc9cd7d4-55b8-4008-9b37-040142576d79\" (UID: \"fc9cd7d4-55b8-4008-9b37-040142576d79\") " Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.556637 4869 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42ae2e09-f75f-4bb9-927d-6b0aba81872f-operator-scripts\") pod \"42ae2e09-f75f-4bb9-927d-6b0aba81872f\" (UID: \"42ae2e09-f75f-4bb9-927d-6b0aba81872f\") " Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.557078 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc9cd7d4-55b8-4008-9b37-040142576d79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc9cd7d4-55b8-4008-9b37-040142576d79" (UID: "fc9cd7d4-55b8-4008-9b37-040142576d79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.557087 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6589777-9306-4d6c-9c5a-ae0961448cb9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d6589777-9306-4d6c-9c5a-ae0961448cb9" (UID: "d6589777-9306-4d6c-9c5a-ae0961448cb9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.557191 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42ae2e09-f75f-4bb9-927d-6b0aba81872f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "42ae2e09-f75f-4bb9-927d-6b0aba81872f" (UID: "42ae2e09-f75f-4bb9-927d-6b0aba81872f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.559584 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc9cd7d4-55b8-4008-9b37-040142576d79-kube-api-access-rjkhr" (OuterVolumeSpecName: "kube-api-access-rjkhr") pod "fc9cd7d4-55b8-4008-9b37-040142576d79" (UID: "fc9cd7d4-55b8-4008-9b37-040142576d79"). InnerVolumeSpecName "kube-api-access-rjkhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.560077 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6589777-9306-4d6c-9c5a-ae0961448cb9-kube-api-access-vtzj6" (OuterVolumeSpecName: "kube-api-access-vtzj6") pod "d6589777-9306-4d6c-9c5a-ae0961448cb9" (UID: "d6589777-9306-4d6c-9c5a-ae0961448cb9"). InnerVolumeSpecName "kube-api-access-vtzj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.576980 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ae2e09-f75f-4bb9-927d-6b0aba81872f-kube-api-access-qvjzv" (OuterVolumeSpecName: "kube-api-access-qvjzv") pod "42ae2e09-f75f-4bb9-927d-6b0aba81872f" (UID: "42ae2e09-f75f-4bb9-927d-6b0aba81872f"). InnerVolumeSpecName "kube-api-access-qvjzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.658116 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc9cd7d4-55b8-4008-9b37-040142576d79-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.658148 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42ae2e09-f75f-4bb9-927d-6b0aba81872f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.658157 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvjzv\" (UniqueName: \"kubernetes.io/projected/42ae2e09-f75f-4bb9-927d-6b0aba81872f-kube-api-access-qvjzv\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.658173 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjkhr\" (UniqueName: \"kubernetes.io/projected/fc9cd7d4-55b8-4008-9b37-040142576d79-kube-api-access-rjkhr\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.658185 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6589777-9306-4d6c-9c5a-ae0961448cb9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.658198 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtzj6\" (UniqueName: \"kubernetes.io/projected/d6589777-9306-4d6c-9c5a-ae0961448cb9-kube-api-access-vtzj6\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.816091 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-0d3a-account-create-update-ht79k" event={"ID":"c74a40f5-6fe0-406f-bf62-6d643e7f7f22","Type":"ContainerDied","Data":"395b0b2212f96408c661e4aff6709a331e47f3e60a26fab00dbee01a89e241e3"} Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.816133 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="395b0b2212f96408c661e4aff6709a331e47f3e60a26fab00dbee01a89e241e3" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.816188 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-0d3a-account-create-update-ht79k" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.818007 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2hhw5" event={"ID":"fc9cd7d4-55b8-4008-9b37-040142576d79","Type":"ContainerDied","Data":"069661069a675be8c5bb81a3eb8b37740953df714ff4ffafd777ac09d044099a"} Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.818031 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="069661069a675be8c5bb81a3eb8b37740953df714ff4ffafd777ac09d044099a" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.818070 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-2hhw5" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.823515 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-85w6r" event={"ID":"42ae2e09-f75f-4bb9-927d-6b0aba81872f","Type":"ContainerDied","Data":"d09491a8e602e3c5d880a4adb90d6cfea10f7a1ab9fccd1bdaf2175e1dcf4959"} Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.823553 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d09491a8e602e3c5d880a4adb90d6cfea10f7a1ab9fccd1bdaf2175e1dcf4959" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.823602 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-85w6r" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.829934 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-d360-account-create-update-mj76v" event={"ID":"d6589777-9306-4d6c-9c5a-ae0961448cb9","Type":"ContainerDied","Data":"7263a56b5fb6e0bab1a56ed598d9c6bed881c49f3a7f0159b7eca0136924e44f"} Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.829978 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7263a56b5fb6e0bab1a56ed598d9c6bed881c49f3a7f0159b7eca0136924e44f" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.830085 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-d360-account-create-update-mj76v" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.851592 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.851388 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-92ks9" event={"ID":"1e7566f7-7393-481e-bac9-db0f5c880b46","Type":"ContainerDied","Data":"ad11b1a2b80aa69c4dd0aca0361ab3453ebd5123361f3dfe5d87b000d2ef73c0"} Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.853351 4869 scope.go:117] "RemoveContainer" containerID="d64ebf53feecdc34a2fca950b056fbb93532bbd43e41cb4e164d56f31c298a0a" Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.886746 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-92ks9"] Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.894264 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-92ks9"] Jan 06 14:17:47 crc kubenswrapper[4869]: I0106 14:17:47.908299 4869 scope.go:117] "RemoveContainer" containerID="21ef037692ad5f98a490a550a7e2853e65723a4761dea31c7300997ebed79b02" Jan 06 14:17:48 crc kubenswrapper[4869]: I0106 14:17:48.179747 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xzm5r" Jan 06 14:17:48 crc kubenswrapper[4869]: I0106 14:17:48.264145 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-454e-account-create-update-kvssd" Jan 06 14:17:48 crc kubenswrapper[4869]: I0106 14:17:48.272696 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e6602c6-cd27-4d18-91d8-47d0eb285a52-operator-scripts\") pod \"6e6602c6-cd27-4d18-91d8-47d0eb285a52\" (UID: \"6e6602c6-cd27-4d18-91d8-47d0eb285a52\") " Jan 06 14:17:48 crc kubenswrapper[4869]: I0106 14:17:48.272834 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpdcw\" (UniqueName: \"kubernetes.io/projected/6e6602c6-cd27-4d18-91d8-47d0eb285a52-kube-api-access-gpdcw\") pod \"6e6602c6-cd27-4d18-91d8-47d0eb285a52\" (UID: \"6e6602c6-cd27-4d18-91d8-47d0eb285a52\") " Jan 06 14:17:48 crc kubenswrapper[4869]: I0106 14:17:48.274837 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e6602c6-cd27-4d18-91d8-47d0eb285a52-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6e6602c6-cd27-4d18-91d8-47d0eb285a52" (UID: "6e6602c6-cd27-4d18-91d8-47d0eb285a52"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:48 crc kubenswrapper[4869]: I0106 14:17:48.284622 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e6602c6-cd27-4d18-91d8-47d0eb285a52-kube-api-access-gpdcw" (OuterVolumeSpecName: "kube-api-access-gpdcw") pod "6e6602c6-cd27-4d18-91d8-47d0eb285a52" (UID: "6e6602c6-cd27-4d18-91d8-47d0eb285a52"). InnerVolumeSpecName "kube-api-access-gpdcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:17:48 crc kubenswrapper[4869]: I0106 14:17:48.374053 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwtnc\" (UniqueName: \"kubernetes.io/projected/1bc8c590-2b3d-47a0-ada1-029b0d12210d-kube-api-access-lwtnc\") pod \"1bc8c590-2b3d-47a0-ada1-029b0d12210d\" (UID: \"1bc8c590-2b3d-47a0-ada1-029b0d12210d\") " Jan 06 14:17:48 crc kubenswrapper[4869]: I0106 14:17:48.374196 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bc8c590-2b3d-47a0-ada1-029b0d12210d-operator-scripts\") pod \"1bc8c590-2b3d-47a0-ada1-029b0d12210d\" (UID: \"1bc8c590-2b3d-47a0-ada1-029b0d12210d\") " Jan 06 14:17:48 crc kubenswrapper[4869]: I0106 14:17:48.374499 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e6602c6-cd27-4d18-91d8-47d0eb285a52-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:48 crc kubenswrapper[4869]: I0106 14:17:48.374511 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpdcw\" (UniqueName: \"kubernetes.io/projected/6e6602c6-cd27-4d18-91d8-47d0eb285a52-kube-api-access-gpdcw\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:48 crc kubenswrapper[4869]: I0106 14:17:48.374809 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bc8c590-2b3d-47a0-ada1-029b0d12210d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1bc8c590-2b3d-47a0-ada1-029b0d12210d" (UID: "1bc8c590-2b3d-47a0-ada1-029b0d12210d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:17:48 crc kubenswrapper[4869]: I0106 14:17:48.379174 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bc8c590-2b3d-47a0-ada1-029b0d12210d-kube-api-access-lwtnc" (OuterVolumeSpecName: "kube-api-access-lwtnc") pod "1bc8c590-2b3d-47a0-ada1-029b0d12210d" (UID: "1bc8c590-2b3d-47a0-ada1-029b0d12210d"). InnerVolumeSpecName "kube-api-access-lwtnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:17:48 crc kubenswrapper[4869]: I0106 14:17:48.483151 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bc8c590-2b3d-47a0-ada1-029b0d12210d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:48 crc kubenswrapper[4869]: I0106 14:17:48.483195 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwtnc\" (UniqueName: \"kubernetes.io/projected/1bc8c590-2b3d-47a0-ada1-029b0d12210d-kube-api-access-lwtnc\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:48 crc kubenswrapper[4869]: I0106 14:17:48.865405 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xzm5r" event={"ID":"6e6602c6-cd27-4d18-91d8-47d0eb285a52","Type":"ContainerDied","Data":"c7873b5d240f55305729bab9ba9a1443b26ce8fd66538be15449cdc8428d36bd"} Jan 06 14:17:48 crc kubenswrapper[4869]: I0106 14:17:48.865689 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7873b5d240f55305729bab9ba9a1443b26ce8fd66538be15449cdc8428d36bd" Jan 06 14:17:48 crc kubenswrapper[4869]: I0106 14:17:48.865454 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xzm5r" Jan 06 14:17:48 crc kubenswrapper[4869]: I0106 14:17:48.871242 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-454e-account-create-update-kvssd" event={"ID":"1bc8c590-2b3d-47a0-ada1-029b0d12210d","Type":"ContainerDied","Data":"7a78d3f8007208c95b7a3ea6ed8790c58a4d69c7535950042c7edc89b13ffd95"} Jan 06 14:17:48 crc kubenswrapper[4869]: I0106 14:17:48.871280 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a78d3f8007208c95b7a3ea6ed8790c58a4d69c7535950042c7edc89b13ffd95" Jan 06 14:17:48 crc kubenswrapper[4869]: I0106 14:17:48.871360 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-454e-account-create-update-kvssd" Jan 06 14:17:49 crc kubenswrapper[4869]: I0106 14:17:49.714795 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e7566f7-7393-481e-bac9-db0f5c880b46" path="/var/lib/kubelet/pods/1e7566f7-7393-481e-bac9-db0f5c880b46/volumes" Jan 06 14:17:51 crc kubenswrapper[4869]: I0106 14:17:51.901292 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-k4szp" event={"ID":"2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23","Type":"ContainerStarted","Data":"508bae0d5905ac1881be1320d226acd6eaba11ee012e6b14f7aeef1db91afa65"} Jan 06 14:17:51 crc kubenswrapper[4869]: I0106 14:17:51.925391 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-k4szp" podStartSLOduration=2.084530893 podStartE2EDuration="7.925367324s" podCreationTimestamp="2026-01-06 14:17:44 +0000 UTC" firstStartedPulling="2026-01-06 14:17:45.510392571 +0000 UTC m=+1084.050080235" lastFinishedPulling="2026-01-06 14:17:51.351228982 +0000 UTC m=+1089.890916666" observedRunningTime="2026-01-06 14:17:51.920710816 +0000 UTC m=+1090.460398480" watchObservedRunningTime="2026-01-06 14:17:51.925367324 +0000 UTC m=+1090.465055008" Jan 06 14:17:56 crc kubenswrapper[4869]: I0106 14:17:56.937892 4869 generic.go:334] "Generic (PLEG): container finished" podID="2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23" containerID="508bae0d5905ac1881be1320d226acd6eaba11ee012e6b14f7aeef1db91afa65" exitCode=0 Jan 06 14:17:56 crc kubenswrapper[4869]: I0106 14:17:56.937951 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-k4szp" event={"ID":"2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23","Type":"ContainerDied","Data":"508bae0d5905ac1881be1320d226acd6eaba11ee012e6b14f7aeef1db91afa65"} Jan 06 14:17:58 crc kubenswrapper[4869]: I0106 14:17:58.310090 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-k4szp" Jan 06 14:17:58 crc kubenswrapper[4869]: I0106 14:17:58.347980 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23-config-data\") pod \"2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23\" (UID: \"2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23\") " Jan 06 14:17:58 crc kubenswrapper[4869]: I0106 14:17:58.348341 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b89d4\" (UniqueName: \"kubernetes.io/projected/2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23-kube-api-access-b89d4\") pod \"2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23\" (UID: \"2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23\") " Jan 06 14:17:58 crc kubenswrapper[4869]: I0106 14:17:58.348441 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23-combined-ca-bundle\") pod \"2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23\" (UID: \"2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23\") " Jan 06 14:17:58 crc kubenswrapper[4869]: I0106 14:17:58.373423 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23-kube-api-access-b89d4" (OuterVolumeSpecName: "kube-api-access-b89d4") pod "2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23" (UID: "2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23"). InnerVolumeSpecName "kube-api-access-b89d4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:17:58 crc kubenswrapper[4869]: I0106 14:17:58.423045 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23-config-data" (OuterVolumeSpecName: "config-data") pod "2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23" (UID: "2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:17:58 crc kubenswrapper[4869]: I0106 14:17:58.429429 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23" (UID: "2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:17:58 crc kubenswrapper[4869]: I0106 14:17:58.450524 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:58 crc kubenswrapper[4869]: I0106 14:17:58.450562 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b89d4\" (UniqueName: \"kubernetes.io/projected/2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23-kube-api-access-b89d4\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:58 crc kubenswrapper[4869]: I0106 14:17:58.450574 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:17:58 crc kubenswrapper[4869]: I0106 14:17:58.955173 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-k4szp" event={"ID":"2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23","Type":"ContainerDied","Data":"739b952fb7f7618fdc9ff8328f701801cf565d917502908bb70d0f56d6918aa8"} Jan 06 14:17:58 crc kubenswrapper[4869]: I0106 14:17:58.955237 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="739b952fb7f7618fdc9ff8328f701801cf565d917502908bb70d0f56d6918aa8" Jan 06 14:17:58 crc kubenswrapper[4869]: I0106 14:17:58.955383 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-k4szp" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.230858 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-kb844"] Jan 06 14:17:59 crc kubenswrapper[4869]: E0106 14:17:59.231207 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e7566f7-7393-481e-bac9-db0f5c880b46" containerName="dnsmasq-dns" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.231228 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e7566f7-7393-481e-bac9-db0f5c880b46" containerName="dnsmasq-dns" Jan 06 14:17:59 crc kubenswrapper[4869]: E0106 14:17:59.231244 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e7566f7-7393-481e-bac9-db0f5c880b46" containerName="init" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.231251 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e7566f7-7393-481e-bac9-db0f5c880b46" containerName="init" Jan 06 14:17:59 crc kubenswrapper[4869]: E0106 14:17:59.231260 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bc8c590-2b3d-47a0-ada1-029b0d12210d" containerName="mariadb-account-create-update" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.231266 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc8c590-2b3d-47a0-ada1-029b0d12210d" containerName="mariadb-account-create-update" Jan 06 14:17:59 crc kubenswrapper[4869]: E0106 14:17:59.231278 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c74a40f5-6fe0-406f-bf62-6d643e7f7f22" containerName="mariadb-account-create-update" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.231285 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c74a40f5-6fe0-406f-bf62-6d643e7f7f22" containerName="mariadb-account-create-update" Jan 06 14:17:59 crc kubenswrapper[4869]: E0106 14:17:59.231293 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23" containerName="keystone-db-sync" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.231299 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23" containerName="keystone-db-sync" Jan 06 14:17:59 crc kubenswrapper[4869]: E0106 14:17:59.231309 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6589777-9306-4d6c-9c5a-ae0961448cb9" containerName="mariadb-account-create-update" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.231314 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6589777-9306-4d6c-9c5a-ae0961448cb9" containerName="mariadb-account-create-update" Jan 06 14:17:59 crc kubenswrapper[4869]: E0106 14:17:59.231327 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc9cd7d4-55b8-4008-9b37-040142576d79" containerName="mariadb-database-create" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.231334 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9cd7d4-55b8-4008-9b37-040142576d79" containerName="mariadb-database-create" Jan 06 14:17:59 crc kubenswrapper[4869]: E0106 14:17:59.231346 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ae2e09-f75f-4bb9-927d-6b0aba81872f" containerName="mariadb-database-create" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.231353 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ae2e09-f75f-4bb9-927d-6b0aba81872f" containerName="mariadb-database-create" Jan 06 14:17:59 crc kubenswrapper[4869]: E0106 14:17:59.231364 
4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e6602c6-cd27-4d18-91d8-47d0eb285a52" containerName="mariadb-database-create" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.231369 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e6602c6-cd27-4d18-91d8-47d0eb285a52" containerName="mariadb-database-create" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.231504 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ae2e09-f75f-4bb9-927d-6b0aba81872f" containerName="mariadb-database-create" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.231519 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e7566f7-7393-481e-bac9-db0f5c880b46" containerName="dnsmasq-dns" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.231530 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c74a40f5-6fe0-406f-bf62-6d643e7f7f22" containerName="mariadb-account-create-update" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.231540 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bc8c590-2b3d-47a0-ada1-029b0d12210d" containerName="mariadb-account-create-update" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.231550 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23" containerName="keystone-db-sync" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.231579 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc9cd7d4-55b8-4008-9b37-040142576d79" containerName="mariadb-database-create" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.231590 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6589777-9306-4d6c-9c5a-ae0961448cb9" containerName="mariadb-account-create-update" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.231597 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e6602c6-cd27-4d18-91d8-47d0eb285a52" containerName="mariadb-database-create" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.232391 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-kb844" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.244200 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-pwrqp"] Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.245198 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pwrqp" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.249819 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.249823 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.250032 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6zj5p" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.252941 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.253168 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.262347 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-kb844"] Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.268920 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-dns-svc\") pod \"dnsmasq-dns-6546db6db7-kb844\" (UID: \"75260d51-070e-480e-abdb-90e8ae3d75f3\") " pod="openstack/dnsmasq-dns-6546db6db7-kb844" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.268971 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-config\") pod \"dnsmasq-dns-6546db6db7-kb844\" (UID: \"75260d51-070e-480e-abdb-90e8ae3d75f3\") " pod="openstack/dnsmasq-dns-6546db6db7-kb844" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.269032 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-kb844\" (UID: \"75260d51-070e-480e-abdb-90e8ae3d75f3\") " pod="openstack/dnsmasq-dns-6546db6db7-kb844" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.269130 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-kb844\" (UID: \"75260d51-070e-480e-abdb-90e8ae3d75f3\") " pod="openstack/dnsmasq-dns-6546db6db7-kb844" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.269186 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-fernet-keys\") pod \"keystone-bootstrap-pwrqp\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") " pod="openstack/keystone-bootstrap-pwrqp" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.269241 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-config-data\") pod \"keystone-bootstrap-pwrqp\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") " pod="openstack/keystone-bootstrap-pwrqp" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.269315 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-combined-ca-bundle\") pod \"keystone-bootstrap-pwrqp\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") " pod="openstack/keystone-bootstrap-pwrqp" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.269339 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-scripts\") pod \"keystone-bootstrap-pwrqp\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") " pod="openstack/keystone-bootstrap-pwrqp" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.269371 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxdl5\" (UniqueName: \"kubernetes.io/projected/75260d51-070e-480e-abdb-90e8ae3d75f3-kube-api-access-nxdl5\") pod \"dnsmasq-dns-6546db6db7-kb844\" (UID: \"75260d51-070e-480e-abdb-90e8ae3d75f3\") " pod="openstack/dnsmasq-dns-6546db6db7-kb844" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.269398 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqgb4\" (UniqueName: \"kubernetes.io/projected/9d75a768-8cb4-4876-9923-3fbf49a6f257-kube-api-access-fqgb4\") pod \"keystone-bootstrap-pwrqp\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") " pod="openstack/keystone-bootstrap-pwrqp" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.269561 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-credential-keys\") pod \"keystone-bootstrap-pwrqp\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") " pod="openstack/keystone-bootstrap-pwrqp" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.281291 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pwrqp"] Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.370816 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-combined-ca-bundle\") pod \"keystone-bootstrap-pwrqp\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") " pod="openstack/keystone-bootstrap-pwrqp" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.371157 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-scripts\") pod \"keystone-bootstrap-pwrqp\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") " pod="openstack/keystone-bootstrap-pwrqp" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.371193 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxdl5\" (UniqueName: \"kubernetes.io/projected/75260d51-070e-480e-abdb-90e8ae3d75f3-kube-api-access-nxdl5\") pod \"dnsmasq-dns-6546db6db7-kb844\" (UID: \"75260d51-070e-480e-abdb-90e8ae3d75f3\") " pod="openstack/dnsmasq-dns-6546db6db7-kb844" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.371222 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqgb4\" (UniqueName: \"kubernetes.io/projected/9d75a768-8cb4-4876-9923-3fbf49a6f257-kube-api-access-fqgb4\") pod \"keystone-bootstrap-pwrqp\" (UID: 
\"9d75a768-8cb4-4876-9923-3fbf49a6f257\") " pod="openstack/keystone-bootstrap-pwrqp" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.371270 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-credential-keys\") pod \"keystone-bootstrap-pwrqp\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") " pod="openstack/keystone-bootstrap-pwrqp" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.371306 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-dns-svc\") pod \"dnsmasq-dns-6546db6db7-kb844\" (UID: \"75260d51-070e-480e-abdb-90e8ae3d75f3\") " pod="openstack/dnsmasq-dns-6546db6db7-kb844" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.371321 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-config\") pod \"dnsmasq-dns-6546db6db7-kb844\" (UID: \"75260d51-070e-480e-abdb-90e8ae3d75f3\") " pod="openstack/dnsmasq-dns-6546db6db7-kb844" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.371356 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-kb844\" (UID: \"75260d51-070e-480e-abdb-90e8ae3d75f3\") " pod="openstack/dnsmasq-dns-6546db6db7-kb844" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.371385 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-kb844\" (UID: \"75260d51-070e-480e-abdb-90e8ae3d75f3\") " pod="openstack/dnsmasq-dns-6546db6db7-kb844" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.371413 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-fernet-keys\") pod \"keystone-bootstrap-pwrqp\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") " pod="openstack/keystone-bootstrap-pwrqp" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.371449 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-config-data\") pod \"keystone-bootstrap-pwrqp\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") " pod="openstack/keystone-bootstrap-pwrqp" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.372387 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-dns-svc\") pod \"dnsmasq-dns-6546db6db7-kb844\" (UID: \"75260d51-070e-480e-abdb-90e8ae3d75f3\") " pod="openstack/dnsmasq-dns-6546db6db7-kb844" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.372735 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-config\") pod \"dnsmasq-dns-6546db6db7-kb844\" (UID: \"75260d51-070e-480e-abdb-90e8ae3d75f3\") " pod="openstack/dnsmasq-dns-6546db6db7-kb844" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.372837 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-kb844\" (UID: \"75260d51-070e-480e-abdb-90e8ae3d75f3\") " pod="openstack/dnsmasq-dns-6546db6db7-kb844" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.373138 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-kb844\" (UID: \"75260d51-070e-480e-abdb-90e8ae3d75f3\") " pod="openstack/dnsmasq-dns-6546db6db7-kb844" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.383319 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-credential-keys\") pod \"keystone-bootstrap-pwrqp\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") " pod="openstack/keystone-bootstrap-pwrqp" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.383469 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-config-data\") pod \"keystone-bootstrap-pwrqp\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") " pod="openstack/keystone-bootstrap-pwrqp" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.386283 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-combined-ca-bundle\") pod \"keystone-bootstrap-pwrqp\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") " pod="openstack/keystone-bootstrap-pwrqp" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.388924 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-scripts\") pod \"keystone-bootstrap-pwrqp\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") " pod="openstack/keystone-bootstrap-pwrqp" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.389651 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-fernet-keys\") pod \"keystone-bootstrap-pwrqp\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") " pod="openstack/keystone-bootstrap-pwrqp" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.394470 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqgb4\" (UniqueName: \"kubernetes.io/projected/9d75a768-8cb4-4876-9923-3fbf49a6f257-kube-api-access-fqgb4\") pod \"keystone-bootstrap-pwrqp\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") " pod="openstack/keystone-bootstrap-pwrqp" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.412325 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxdl5\" (UniqueName: \"kubernetes.io/projected/75260d51-070e-480e-abdb-90e8ae3d75f3-kube-api-access-nxdl5\") pod \"dnsmasq-dns-6546db6db7-kb844\" (UID: \"75260d51-070e-480e-abdb-90e8ae3d75f3\") " pod="openstack/dnsmasq-dns-6546db6db7-kb844" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.496766 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.498508 4869 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.501287 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.501516 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.527654 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.557717 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-nzc44"] Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.557964 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-kb844" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.558749 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-nzc44" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.561978 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.562118 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.562207 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vm6n8" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.571243 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pwrqp" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.578249 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-nzc44"] Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.579522 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48df0513-e689-44db-8e53-3aa186ab3063-run-httpd\") pod \"ceilometer-0\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.579563 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-scripts\") pod \"ceilometer-0\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.579588 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48df0513-e689-44db-8e53-3aa186ab3063-log-httpd\") pod \"ceilometer-0\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.579620 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.579691 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-config-data\") pod \"ceilometer-0\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.579707 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s55hs\" (UniqueName: \"kubernetes.io/projected/48df0513-e689-44db-8e53-3aa186ab3063-kube-api-access-s55hs\") pod \"ceilometer-0\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.579725 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.611921 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-kb844"] Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.661727 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-5qp9n"] Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.664426 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.668909 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.669101 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.669253 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ztln6" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.682452 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6c4b71-32a5-473c-bdbb-d23acccaf5a3-combined-ca-bundle\") pod \"neutron-db-sync-nzc44\" (UID: \"1f6c4b71-32a5-473c-bdbb-d23acccaf5a3\") " pod="openstack/neutron-db-sync-nzc44" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.682535 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f6c4b71-32a5-473c-bdbb-d23acccaf5a3-config\") pod \"neutron-db-sync-nzc44\" (UID: \"1f6c4b71-32a5-473c-bdbb-d23acccaf5a3\") " pod="openstack/neutron-db-sync-nzc44" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.682568 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtpzm\" (UniqueName: \"kubernetes.io/projected/1f6c4b71-32a5-473c-bdbb-d23acccaf5a3-kube-api-access-gtpzm\") pod \"neutron-db-sync-nzc44\" (UID: \"1f6c4b71-32a5-473c-bdbb-d23acccaf5a3\") " pod="openstack/neutron-db-sync-nzc44" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.682633 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48df0513-e689-44db-8e53-3aa186ab3063-run-httpd\") pod \"ceilometer-0\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.682702 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-scripts\") pod \"ceilometer-0\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.682735 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48df0513-e689-44db-8e53-3aa186ab3063-log-httpd\") pod \"ceilometer-0\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.682778 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.682836 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-config-data\") pod \"ceilometer-0\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.682861 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s55hs\" (UniqueName: \"kubernetes.io/projected/48df0513-e689-44db-8e53-3aa186ab3063-kube-api-access-s55hs\") pod \"ceilometer-0\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.682887 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.683857 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48df0513-e689-44db-8e53-3aa186ab3063-log-httpd\") pod \"ceilometer-0\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.686403 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48df0513-e689-44db-8e53-3aa186ab3063-run-httpd\") pod \"ceilometer-0\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.686466 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-xbh9m"] Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.687419 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-xbh9m" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.699474 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-scripts\") pod \"ceilometer-0\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.699909 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.700167 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-7hbc4" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.700288 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.700298 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.702337 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.710884 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-config-data\") pod \"ceilometer-0\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.722297 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s55hs\" (UniqueName: \"kubernetes.io/projected/48df0513-e689-44db-8e53-3aa186ab3063-kube-api-access-s55hs\") pod \"ceilometer-0\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.732972 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5qp9n"] Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.758464 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-6kdvd"] Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.760195 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.784229 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-6kdvd\" (UID: \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\") " pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.784273 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5324a677-1d17-4031-ace1-8fc98bc58f9d-etc-machine-id\") pod \"cinder-db-sync-5qp9n\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.784302 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-config-data\") pod \"placement-db-sync-xbh9m\" (UID: \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\") " pod="openstack/placement-db-sync-xbh9m" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.785124 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-logs\") pod \"placement-db-sync-xbh9m\" (UID: \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\") " pod="openstack/placement-db-sync-xbh9m" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.785238 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-combined-ca-bundle\") pod \"placement-db-sync-xbh9m\" (UID: \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\") " pod="openstack/placement-db-sync-xbh9m" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.785282 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6c4b71-32a5-473c-bdbb-d23acccaf5a3-combined-ca-bundle\") pod \"neutron-db-sync-nzc44\" (UID: \"1f6c4b71-32a5-473c-bdbb-d23acccaf5a3\") " pod="openstack/neutron-db-sync-nzc44" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.785362 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f6c4b71-32a5-473c-bdbb-d23acccaf5a3-config\") pod \"neutron-db-sync-nzc44\" (UID: \"1f6c4b71-32a5-473c-bdbb-d23acccaf5a3\") " pod="openstack/neutron-db-sync-nzc44" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.785386 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtpzm\" (UniqueName: \"kubernetes.io/projected/1f6c4b71-32a5-473c-bdbb-d23acccaf5a3-kube-api-access-gtpzm\") pod \"neutron-db-sync-nzc44\" (UID: \"1f6c4b71-32a5-473c-bdbb-d23acccaf5a3\") " pod="openstack/neutron-db-sync-nzc44" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.785459 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9rcd\" (UniqueName: \"kubernetes.io/projected/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-kube-api-access-q9rcd\") pod \"dnsmasq-dns-7987f74bbc-6kdvd\" (UID: \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\") " 
pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.785490 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-combined-ca-bundle\") pod \"cinder-db-sync-5qp9n\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.793736 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f6c4b71-32a5-473c-bdbb-d23acccaf5a3-config\") pod \"neutron-db-sync-nzc44\" (UID: \"1f6c4b71-32a5-473c-bdbb-d23acccaf5a3\") " pod="openstack/neutron-db-sync-nzc44" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.794018 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-6kdvd\" (UID: \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\") " pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.794193 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-db-sync-config-data\") pod \"cinder-db-sync-5qp9n\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.794227 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-config\") pod \"dnsmasq-dns-7987f74bbc-6kdvd\" (UID: \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\") " pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.794272 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fc26\" (UniqueName: \"kubernetes.io/projected/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-kube-api-access-6fc26\") pod \"placement-db-sync-xbh9m\" (UID: \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\") " pod="openstack/placement-db-sync-xbh9m" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.794305 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdxm5\" (UniqueName: \"kubernetes.io/projected/5324a677-1d17-4031-ace1-8fc98bc58f9d-kube-api-access-pdxm5\") pod \"cinder-db-sync-5qp9n\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.794384 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-6kdvd\" (UID: \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\") " pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.794458 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-config-data\") pod \"cinder-db-sync-5qp9n\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " 
pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.794488 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-scripts\") pod \"placement-db-sync-xbh9m\" (UID: \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\") " pod="openstack/placement-db-sync-xbh9m" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.794547 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-scripts\") pod \"cinder-db-sync-5qp9n\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.827562 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6c4b71-32a5-473c-bdbb-d23acccaf5a3-combined-ca-bundle\") pod \"neutron-db-sync-nzc44\" (UID: \"1f6c4b71-32a5-473c-bdbb-d23acccaf5a3\") " pod="openstack/neutron-db-sync-nzc44" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.860046 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-xbh9m"] Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.876412 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.877271 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtpzm\" (UniqueName: \"kubernetes.io/projected/1f6c4b71-32a5-473c-bdbb-d23acccaf5a3-kube-api-access-gtpzm\") pod \"neutron-db-sync-nzc44\" (UID: \"1f6c4b71-32a5-473c-bdbb-d23acccaf5a3\") " pod="openstack/neutron-db-sync-nzc44" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.909977 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-config\") pod \"dnsmasq-dns-7987f74bbc-6kdvd\" (UID: \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\") " pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.910057 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-db-sync-config-data\") pod \"cinder-db-sync-5qp9n\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.910096 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fc26\" (UniqueName: \"kubernetes.io/projected/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-kube-api-access-6fc26\") pod \"placement-db-sync-xbh9m\" (UID: \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\") " pod="openstack/placement-db-sync-xbh9m" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.910139 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdxm5\" (UniqueName: \"kubernetes.io/projected/5324a677-1d17-4031-ace1-8fc98bc58f9d-kube-api-access-pdxm5\") pod \"cinder-db-sync-5qp9n\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.910187 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-6kdvd\" (UID: \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\") " pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.910234 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-config-data\") pod \"cinder-db-sync-5qp9n\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.910265 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-scripts\") pod \"placement-db-sync-xbh9m\" (UID: \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\") " pod="openstack/placement-db-sync-xbh9m" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.910305 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-scripts\") pod \"cinder-db-sync-5qp9n\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.910344 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5324a677-1d17-4031-ace1-8fc98bc58f9d-etc-machine-id\") pod \"cinder-db-sync-5qp9n\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.910369 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-6kdvd\" (UID: \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\") " pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.910403 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-config-data\") pod \"placement-db-sync-xbh9m\" (UID: \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\") " pod="openstack/placement-db-sync-xbh9m" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.910435 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-logs\") pod \"placement-db-sync-xbh9m\" (UID: \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\") " pod="openstack/placement-db-sync-xbh9m" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.910487 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-combined-ca-bundle\") pod \"placement-db-sync-xbh9m\" (UID: \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\") " pod="openstack/placement-db-sync-xbh9m" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.910549 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9rcd\" (UniqueName: \"kubernetes.io/projected/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-kube-api-access-q9rcd\") pod \"dnsmasq-dns-7987f74bbc-6kdvd\" (UID: 
\"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\") " pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.910583 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-combined-ca-bundle\") pod \"cinder-db-sync-5qp9n\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.910618 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-6kdvd\" (UID: \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\") " pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.925831 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5324a677-1d17-4031-ace1-8fc98bc58f9d-etc-machine-id\") pod \"cinder-db-sync-5qp9n\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.926793 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-6kdvd\" (UID: \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\") " pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.919288 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-6kdvd\" (UID: \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\") " pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.928510 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-6kdvd\" (UID: \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\") " pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.929110 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-config\") pod \"dnsmasq-dns-7987f74bbc-6kdvd\" (UID: \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\") " pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.932180 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-logs\") pod \"placement-db-sync-xbh9m\" (UID: \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\") " pod="openstack/placement-db-sync-xbh9m" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.957100 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-scripts\") pod \"placement-db-sync-xbh9m\" (UID: \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\") " pod="openstack/placement-db-sync-xbh9m" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.961285 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-scripts\") pod \"cinder-db-sync-5qp9n\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.961863 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-db-sync-config-data\") pod \"cinder-db-sync-5qp9n\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.962305 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-config-data\") pod \"placement-db-sync-xbh9m\" (UID: \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\") " pod="openstack/placement-db-sync-xbh9m" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.962337 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-config-data\") pod \"cinder-db-sync-5qp9n\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.964555 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-combined-ca-bundle\") pod \"cinder-db-sync-5qp9n\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.965540 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-fw77s"] Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.966189 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-combined-ca-bundle\") pod \"placement-db-sync-xbh9m\" (UID: \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\") " pod="openstack/placement-db-sync-xbh9m" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.966368 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fc26\" (UniqueName: \"kubernetes.io/projected/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-kube-api-access-6fc26\") pod \"placement-db-sync-xbh9m\" (UID: \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\") " pod="openstack/placement-db-sync-xbh9m" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.969839 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-fw77s" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.970907 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9rcd\" (UniqueName: \"kubernetes.io/projected/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-kube-api-access-q9rcd\") pod \"dnsmasq-dns-7987f74bbc-6kdvd\" (UID: \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\") " pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.976276 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdxm5\" (UniqueName: \"kubernetes.io/projected/5324a677-1d17-4031-ace1-8fc98bc58f9d-kube-api-access-pdxm5\") pod \"cinder-db-sync-5qp9n\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.976773 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-hhr22" Jan 06 14:17:59 crc kubenswrapper[4869]: I0106 14:17:59.977023 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.003260 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-6kdvd"] Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.033913 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fw77s"] Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.038392 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-nzc44" Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.085099 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.097540 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xbh9m" Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.116276 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj4bz\" (UniqueName: \"kubernetes.io/projected/64424807-a383-4509-a25c-947f73a29e64-kube-api-access-lj4bz\") pod \"barbican-db-sync-fw77s\" (UID: \"64424807-a383-4509-a25c-947f73a29e64\") " pod="openstack/barbican-db-sync-fw77s" Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.116410 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64424807-a383-4509-a25c-947f73a29e64-combined-ca-bundle\") pod \"barbican-db-sync-fw77s\" (UID: \"64424807-a383-4509-a25c-947f73a29e64\") " pod="openstack/barbican-db-sync-fw77s" Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.116430 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64424807-a383-4509-a25c-947f73a29e64-db-sync-config-data\") pod \"barbican-db-sync-fw77s\" (UID: \"64424807-a383-4509-a25c-947f73a29e64\") " pod="openstack/barbican-db-sync-fw77s" Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.142305 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.221698 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64424807-a383-4509-a25c-947f73a29e64-combined-ca-bundle\") pod \"barbican-db-sync-fw77s\" (UID: \"64424807-a383-4509-a25c-947f73a29e64\") " pod="openstack/barbican-db-sync-fw77s" Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.221738 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64424807-a383-4509-a25c-947f73a29e64-db-sync-config-data\") pod \"barbican-db-sync-fw77s\" (UID: \"64424807-a383-4509-a25c-947f73a29e64\") " pod="openstack/barbican-db-sync-fw77s" Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.221814 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj4bz\" (UniqueName: \"kubernetes.io/projected/64424807-a383-4509-a25c-947f73a29e64-kube-api-access-lj4bz\") pod \"barbican-db-sync-fw77s\" (UID: \"64424807-a383-4509-a25c-947f73a29e64\") " pod="openstack/barbican-db-sync-fw77s" Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.234155 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64424807-a383-4509-a25c-947f73a29e64-db-sync-config-data\") pod \"barbican-db-sync-fw77s\" (UID: \"64424807-a383-4509-a25c-947f73a29e64\") " pod="openstack/barbican-db-sync-fw77s" Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.234650 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64424807-a383-4509-a25c-947f73a29e64-combined-ca-bundle\") pod \"barbican-db-sync-fw77s\" (UID: \"64424807-a383-4509-a25c-947f73a29e64\") " pod="openstack/barbican-db-sync-fw77s" Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.266837 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj4bz\" (UniqueName: \"kubernetes.io/projected/64424807-a383-4509-a25c-947f73a29e64-kube-api-access-lj4bz\") pod \"barbican-db-sync-fw77s\" (UID: \"64424807-a383-4509-a25c-947f73a29e64\") " pod="openstack/barbican-db-sync-fw77s" Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.309180 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-fw77s" Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.420043 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-kb844"] Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.437815 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pwrqp"] Jan 06 14:18:00 crc kubenswrapper[4869]: W0106 14:18:00.745080 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48df0513_e689_44db_8e53_3aa186ab3063.slice/crio-f2f5a9ba211c5d9d65b63dafc9fda814520150fb12f238ce2c93ce66b4cac1e9 WatchSource:0}: Error finding container f2f5a9ba211c5d9d65b63dafc9fda814520150fb12f238ce2c93ce66b4cac1e9: Status 404 returned error can't find the container with id f2f5a9ba211c5d9d65b63dafc9fda814520150fb12f238ce2c93ce66b4cac1e9 Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.770255 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.790952 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-nzc44"] Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.798565 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-xbh9m"] Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.984696 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48df0513-e689-44db-8e53-3aa186ab3063","Type":"ContainerStarted","Data":"f2f5a9ba211c5d9d65b63dafc9fda814520150fb12f238ce2c93ce66b4cac1e9"} Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.986357 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pwrqp" event={"ID":"9d75a768-8cb4-4876-9923-3fbf49a6f257","Type":"ContainerStarted","Data":"53f7531212a0087efb38b32cd6deed1820f8bf2c1c68e284ed1538cbd5c09baa"} Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.994633 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nzc44" event={"ID":"1f6c4b71-32a5-473c-bdbb-d23acccaf5a3","Type":"ContainerStarted","Data":"e7340b0e208d09c982f73b28bdc50a3b3e79c90c9e174e390f1245f5b1e09dee"} Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.998479 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xbh9m" event={"ID":"4f635641-cd18-4d1b-a2a6-80a4b4b0697b","Type":"ContainerStarted","Data":"968355ea0014d28312fe7c5f9d414584c51e29a87b4bedf98a9eb69c1bd14722"} Jan 06 14:18:00 crc kubenswrapper[4869]: I0106 14:18:00.999529 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-kb844" event={"ID":"75260d51-070e-480e-abdb-90e8ae3d75f3","Type":"ContainerStarted","Data":"81618b7a50014c0345e8f0b989677fb936b1670371ca50ee2c29d71c96505e73"} Jan 06 14:18:01 crc kubenswrapper[4869]: I0106 14:18:01.064516 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5qp9n"] Jan 06 14:18:01 crc kubenswrapper[4869]: I0106 14:18:01.202521 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:18:01 crc kubenswrapper[4869]: I0106 14:18:01.215486 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-fw77s"] Jan 06 14:18:01 crc kubenswrapper[4869]: W0106 14:18:01.248972 4869 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf3f5ac4_8b1f_40be_9d3d_eeb091dcf444.slice/crio-869b57bc46530b0488b7ed4971058fd3a711ced66012d736db3f89fac8a14532 WatchSource:0}: Error finding container 869b57bc46530b0488b7ed4971058fd3a711ced66012d736db3f89fac8a14532: Status 404 returned error can't find the container with id 869b57bc46530b0488b7ed4971058fd3a711ced66012d736db3f89fac8a14532 Jan 06 14:18:01 crc kubenswrapper[4869]: I0106 14:18:01.258015 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-6kdvd"] Jan 06 14:18:02 crc kubenswrapper[4869]: I0106 14:18:02.009590 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5qp9n" event={"ID":"5324a677-1d17-4031-ace1-8fc98bc58f9d","Type":"ContainerStarted","Data":"b6fb5126b368fb9b127c063f355589725ccc534cd7d422f752c58b67c26603aa"} Jan 06 14:18:02 crc kubenswrapper[4869]: I0106 14:18:02.011483 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" event={"ID":"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444","Type":"ContainerStarted","Data":"869b57bc46530b0488b7ed4971058fd3a711ced66012d736db3f89fac8a14532"} Jan 06 14:18:02 crc kubenswrapper[4869]: I0106 14:18:02.012886 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fw77s" event={"ID":"64424807-a383-4509-a25c-947f73a29e64","Type":"ContainerStarted","Data":"7558ea441d85201e34fa78128fbb1f15625eb44a4c2500d5bf05acb9ed9a98ba"} Jan 06 14:18:03 crc kubenswrapper[4869]: I0106 14:18:03.622755 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:18:03 crc kubenswrapper[4869]: I0106 14:18:03.623256 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.036351 4869 generic.go:334] "Generic (PLEG): container finished" podID="75260d51-070e-480e-abdb-90e8ae3d75f3" containerID="6b4470e64a279afafcd24205302dc96738b1336003752cad774d4d49e9a1cd04" exitCode=0 Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.036433 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-kb844" event={"ID":"75260d51-070e-480e-abdb-90e8ae3d75f3","Type":"ContainerDied","Data":"6b4470e64a279afafcd24205302dc96738b1336003752cad774d4d49e9a1cd04"} Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.040821 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pwrqp" event={"ID":"9d75a768-8cb4-4876-9923-3fbf49a6f257","Type":"ContainerStarted","Data":"35f0ce42103960943511a227bad87a2055f5eca58a84c08a36e548ccd5d9584e"} Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.043855 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nzc44" event={"ID":"1f6c4b71-32a5-473c-bdbb-d23acccaf5a3","Type":"ContainerStarted","Data":"b5670d758ec2da20f5fbeff760a97965b5c2917bfdcb1a4c4ed32d04db93fcc3"} Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.049328 4869 generic.go:334] "Generic (PLEG): container 
finished" podID="bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444" containerID="b48c4cbf6de15bc69d251c70f2b0e04f1294979382c087ece0b63525ad038933" exitCode=0 Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.049378 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" event={"ID":"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444","Type":"ContainerDied","Data":"b48c4cbf6de15bc69d251c70f2b0e04f1294979382c087ece0b63525ad038933"} Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.076262 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-nzc44" podStartSLOduration=5.076106651 podStartE2EDuration="5.076106651s" podCreationTimestamp="2026-01-06 14:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:18:04.073737671 +0000 UTC m=+1102.613425345" watchObservedRunningTime="2026-01-06 14:18:04.076106651 +0000 UTC m=+1102.615794315" Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.092435 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-pwrqp" podStartSLOduration=5.092406541 podStartE2EDuration="5.092406541s" podCreationTimestamp="2026-01-06 14:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:18:04.090132774 +0000 UTC m=+1102.629820458" watchObservedRunningTime="2026-01-06 14:18:04.092406541 +0000 UTC m=+1102.632094205" Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.466624 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-kb844" Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.649140 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxdl5\" (UniqueName: \"kubernetes.io/projected/75260d51-070e-480e-abdb-90e8ae3d75f3-kube-api-access-nxdl5\") pod \"75260d51-070e-480e-abdb-90e8ae3d75f3\" (UID: \"75260d51-070e-480e-abdb-90e8ae3d75f3\") " Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.649470 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-ovsdbserver-sb\") pod \"75260d51-070e-480e-abdb-90e8ae3d75f3\" (UID: \"75260d51-070e-480e-abdb-90e8ae3d75f3\") " Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.649544 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-config\") pod \"75260d51-070e-480e-abdb-90e8ae3d75f3\" (UID: \"75260d51-070e-480e-abdb-90e8ae3d75f3\") " Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.649683 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-ovsdbserver-nb\") pod \"75260d51-070e-480e-abdb-90e8ae3d75f3\" (UID: \"75260d51-070e-480e-abdb-90e8ae3d75f3\") " Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.649707 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-dns-svc\") pod \"75260d51-070e-480e-abdb-90e8ae3d75f3\" (UID: \"75260d51-070e-480e-abdb-90e8ae3d75f3\") " Jan 06 14:18:04 crc 
Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.683365 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "75260d51-070e-480e-abdb-90e8ae3d75f3" (UID: "75260d51-070e-480e-abdb-90e8ae3d75f3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.683908 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-config" (OuterVolumeSpecName: "config") pod "75260d51-070e-480e-abdb-90e8ae3d75f3" (UID: "75260d51-070e-480e-abdb-90e8ae3d75f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.696212 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "75260d51-070e-480e-abdb-90e8ae3d75f3" (UID: "75260d51-070e-480e-abdb-90e8ae3d75f3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.706516 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "75260d51-070e-480e-abdb-90e8ae3d75f3" (UID: "75260d51-070e-480e-abdb-90e8ae3d75f3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.752364 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.752413 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.752428 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxdl5\" (UniqueName: \"kubernetes.io/projected/75260d51-070e-480e-abdb-90e8ae3d75f3-kube-api-access-nxdl5\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.752442 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:04 crc kubenswrapper[4869]: I0106 14:18:04.752454 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75260d51-070e-480e-abdb-90e8ae3d75f3-config\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:05 crc kubenswrapper[4869]: I0106 14:18:05.066569 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" event={"ID":"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444","Type":"ContainerStarted","Data":"2e0e8e9eed377a0eccfa2667baab160c138f6599b0b99e37bed2f54556224ba7"}
Jan 06 14:18:05 crc kubenswrapper[4869]: I0106 14:18:05.068129 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd"
Jan 06 14:18:05 crc kubenswrapper[4869]: I0106 14:18:05.070203 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-kb844"
Jan 06 14:18:05 crc kubenswrapper[4869]: I0106 14:18:05.071248 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-kb844" event={"ID":"75260d51-070e-480e-abdb-90e8ae3d75f3","Type":"ContainerDied","Data":"81618b7a50014c0345e8f0b989677fb936b1670371ca50ee2c29d71c96505e73"}
Jan 06 14:18:05 crc kubenswrapper[4869]: I0106 14:18:05.071306 4869 scope.go:117] "RemoveContainer" containerID="6b4470e64a279afafcd24205302dc96738b1336003752cad774d4d49e9a1cd04"
Jan 06 14:18:05 crc kubenswrapper[4869]: I0106 14:18:05.097464 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" podStartSLOduration=6.097418276 podStartE2EDuration="6.097418276s" podCreationTimestamp="2026-01-06 14:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:18:05.096260827 +0000 UTC m=+1103.635948501" watchObservedRunningTime="2026-01-06 14:18:05.097418276 +0000 UTC m=+1103.637105940"
Jan 06 14:18:05 crc kubenswrapper[4869]: I0106 14:18:05.180808 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-kb844"]
Jan 06 14:18:05 crc kubenswrapper[4869]: I0106 14:18:05.190176 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-kb844"]
Jan 06 14:18:05 crc kubenswrapper[4869]: I0106 14:18:05.726097 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75260d51-070e-480e-abdb-90e8ae3d75f3" path="/var/lib/kubelet/pods/75260d51-070e-480e-abdb-90e8ae3d75f3/volumes"
Jan 06 14:18:08 crc kubenswrapper[4869]: I0106 14:18:08.099433 4869 generic.go:334] "Generic (PLEG): container finished" podID="9d75a768-8cb4-4876-9923-3fbf49a6f257" containerID="35f0ce42103960943511a227bad87a2055f5eca58a84c08a36e548ccd5d9584e" exitCode=0
Jan 06 14:18:08 crc kubenswrapper[4869]: I0106 14:18:08.099617 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pwrqp" event={"ID":"9d75a768-8cb4-4876-9923-3fbf49a6f257","Type":"ContainerDied","Data":"35f0ce42103960943511a227bad87a2055f5eca58a84c08a36e548ccd5d9584e"}
Jan 06 14:18:10 crc kubenswrapper[4869]: I0106 14:18:10.145970 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd"
Jan 06 14:18:10 crc kubenswrapper[4869]: I0106 14:18:10.212388 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-rmgvq"]
Jan 06 14:18:10 crc kubenswrapper[4869]: I0106 14:18:10.212634 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" podUID="50dff5bf-77a4-43b5-aad5-b621313b4dca" containerName="dnsmasq-dns" containerID="cri-o://7e3736c2d1c7509f63bc9396be06667964a0128d89a2da201618646e1e344e51" gracePeriod=10
Jan 06 14:18:11 crc kubenswrapper[4869]: I0106 14:18:11.133287 4869 generic.go:334] "Generic (PLEG): container finished" podID="50dff5bf-77a4-43b5-aad5-b621313b4dca" containerID="7e3736c2d1c7509f63bc9396be06667964a0128d89a2da201618646e1e344e51" exitCode=0
Jan 06 14:18:11 crc kubenswrapper[4869]: I0106 14:18:11.133361 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" event={"ID":"50dff5bf-77a4-43b5-aad5-b621313b4dca","Type":"ContainerDied","Data":"7e3736c2d1c7509f63bc9396be06667964a0128d89a2da201618646e1e344e51"}
Jan 06 14:18:11 crc kubenswrapper[4869]: I0106 14:18:11.449405 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" podUID="50dff5bf-77a4-43b5-aad5-b621313b4dca" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: connect: connection refused"
Jan 06 14:18:15 crc kubenswrapper[4869]: I0106 14:18:15.412781 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pwrqp"
Jan 06 14:18:15 crc kubenswrapper[4869]: I0106 14:18:15.561456 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-config-data\") pod \"9d75a768-8cb4-4876-9923-3fbf49a6f257\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") "
Jan 06 14:18:15 crc kubenswrapper[4869]: I0106 14:18:15.561523 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-fernet-keys\") pod \"9d75a768-8cb4-4876-9923-3fbf49a6f257\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") "
Jan 06 14:18:15 crc kubenswrapper[4869]: I0106 14:18:15.561574 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-scripts\") pod \"9d75a768-8cb4-4876-9923-3fbf49a6f257\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") "
Jan 06 14:18:15 crc kubenswrapper[4869]: I0106 14:18:15.561591 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqgb4\" (UniqueName: \"kubernetes.io/projected/9d75a768-8cb4-4876-9923-3fbf49a6f257-kube-api-access-fqgb4\") pod \"9d75a768-8cb4-4876-9923-3fbf49a6f257\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") "
Jan 06 14:18:15 crc kubenswrapper[4869]: I0106 14:18:15.561642 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-combined-ca-bundle\") pod \"9d75a768-8cb4-4876-9923-3fbf49a6f257\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") "
Jan 06 14:18:15 crc kubenswrapper[4869]: I0106 14:18:15.561730 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-credential-keys\") pod \"9d75a768-8cb4-4876-9923-3fbf49a6f257\" (UID: \"9d75a768-8cb4-4876-9923-3fbf49a6f257\") "
Jan 06 14:18:15 crc kubenswrapper[4869]: I0106 14:18:15.587013 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9d75a768-8cb4-4876-9923-3fbf49a6f257" (UID: "9d75a768-8cb4-4876-9923-3fbf49a6f257"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 06 14:18:15 crc kubenswrapper[4869]: I0106 14:18:15.586927 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d75a768-8cb4-4876-9923-3fbf49a6f257-kube-api-access-fqgb4" (OuterVolumeSpecName: "kube-api-access-fqgb4") pod "9d75a768-8cb4-4876-9923-3fbf49a6f257" (UID: "9d75a768-8cb4-4876-9923-3fbf49a6f257"). InnerVolumeSpecName "kube-api-access-fqgb4". PluginName "kubernetes.io/projected", VolumeGidValue ""
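The dnsmasq-dns-54f9b7b8d9-rmgvq readiness failures in this stretch ("connect: connection refused", later "i/o timeout") are TCP-socket probes against 10.217.0.123:5353. A minimal Go sketch of that semantics, with hypothetical names: success is simply opening the socket before the deadline.

package main

import (
	"fmt"
	"net"
	"time"
)

// probeTCP mirrors a TCP readiness probe: "connection refused" means the
// address actively rejected the connection; "i/o timeout" means nothing
// answered before the deadline (e.g. the sandbox is already gone).
func probeTCP(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// Address taken from the probe output above.
	if err := probeTCP("10.217.0.123:5353", time.Second); err != nil {
		fmt.Println("Probe failed:", err)
	}
}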
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:18:15 crc kubenswrapper[4869]: I0106 14:18:15.587353 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9d75a768-8cb4-4876-9923-3fbf49a6f257" (UID: "9d75a768-8cb4-4876-9923-3fbf49a6f257"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:18:15 crc kubenswrapper[4869]: I0106 14:18:15.587412 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-scripts" (OuterVolumeSpecName: "scripts") pod "9d75a768-8cb4-4876-9923-3fbf49a6f257" (UID: "9d75a768-8cb4-4876-9923-3fbf49a6f257"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:18:15 crc kubenswrapper[4869]: I0106 14:18:15.597351 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-config-data" (OuterVolumeSpecName: "config-data") pod "9d75a768-8cb4-4876-9923-3fbf49a6f257" (UID: "9d75a768-8cb4-4876-9923-3fbf49a6f257"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:18:15 crc kubenswrapper[4869]: I0106 14:18:15.604080 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d75a768-8cb4-4876-9923-3fbf49a6f257" (UID: "9d75a768-8cb4-4876-9923-3fbf49a6f257"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:18:15 crc kubenswrapper[4869]: I0106 14:18:15.665443 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:15 crc kubenswrapper[4869]: I0106 14:18:15.665482 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqgb4\" (UniqueName: \"kubernetes.io/projected/9d75a768-8cb4-4876-9923-3fbf49a6f257-kube-api-access-fqgb4\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:15 crc kubenswrapper[4869]: I0106 14:18:15.665500 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:15 crc kubenswrapper[4869]: I0106 14:18:15.665512 4869 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:15 crc kubenswrapper[4869]: I0106 14:18:15.665523 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:15 crc kubenswrapper[4869]: I0106 14:18:15.665534 4869 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9d75a768-8cb4-4876-9923-3fbf49a6f257-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.180866 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pwrqp" 
event={"ID":"9d75a768-8cb4-4876-9923-3fbf49a6f257","Type":"ContainerDied","Data":"53f7531212a0087efb38b32cd6deed1820f8bf2c1c68e284ed1538cbd5c09baa"} Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.180914 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53f7531212a0087efb38b32cd6deed1820f8bf2c1c68e284ed1538cbd5c09baa" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.180977 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pwrqp" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.508776 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-pwrqp"] Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.516447 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-pwrqp"] Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.603197 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-d94xt"] Jan 06 14:18:16 crc kubenswrapper[4869]: E0106 14:18:16.603510 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d75a768-8cb4-4876-9923-3fbf49a6f257" containerName="keystone-bootstrap" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.603526 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d75a768-8cb4-4876-9923-3fbf49a6f257" containerName="keystone-bootstrap" Jan 06 14:18:16 crc kubenswrapper[4869]: E0106 14:18:16.603546 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75260d51-070e-480e-abdb-90e8ae3d75f3" containerName="init" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.603553 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="75260d51-070e-480e-abdb-90e8ae3d75f3" containerName="init" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.603708 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="75260d51-070e-480e-abdb-90e8ae3d75f3" containerName="init" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.603728 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d75a768-8cb4-4876-9923-3fbf49a6f257" containerName="keystone-bootstrap" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.604207 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-d94xt" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.606301 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.606372 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6zj5p" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.606547 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.609902 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.610085 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.617529 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-d94xt"] Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.686541 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-scripts\") pod \"keystone-bootstrap-d94xt\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") " pod="openstack/keystone-bootstrap-d94xt" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.686823 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-fernet-keys\") pod \"keystone-bootstrap-d94xt\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") " pod="openstack/keystone-bootstrap-d94xt" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.686863 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-combined-ca-bundle\") pod \"keystone-bootstrap-d94xt\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") " pod="openstack/keystone-bootstrap-d94xt" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.686917 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw6nc\" (UniqueName: \"kubernetes.io/projected/c7f5335d-50bb-4886-a562-e6ff443fb449-kube-api-access-jw6nc\") pod \"keystone-bootstrap-d94xt\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") " pod="openstack/keystone-bootstrap-d94xt" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.686966 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-config-data\") pod \"keystone-bootstrap-d94xt\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") " pod="openstack/keystone-bootstrap-d94xt" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.687019 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-credential-keys\") pod \"keystone-bootstrap-d94xt\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") " pod="openstack/keystone-bootstrap-d94xt" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.788036 4869 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-fernet-keys\") pod \"keystone-bootstrap-d94xt\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") " pod="openstack/keystone-bootstrap-d94xt" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.788101 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-combined-ca-bundle\") pod \"keystone-bootstrap-d94xt\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") " pod="openstack/keystone-bootstrap-d94xt" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.788132 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw6nc\" (UniqueName: \"kubernetes.io/projected/c7f5335d-50bb-4886-a562-e6ff443fb449-kube-api-access-jw6nc\") pod \"keystone-bootstrap-d94xt\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") " pod="openstack/keystone-bootstrap-d94xt" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.788179 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-config-data\") pod \"keystone-bootstrap-d94xt\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") " pod="openstack/keystone-bootstrap-d94xt" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.788226 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-credential-keys\") pod \"keystone-bootstrap-d94xt\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") " pod="openstack/keystone-bootstrap-d94xt" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.788275 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-scripts\") pod \"keystone-bootstrap-d94xt\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") " pod="openstack/keystone-bootstrap-d94xt" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.793936 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-credential-keys\") pod \"keystone-bootstrap-d94xt\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") " pod="openstack/keystone-bootstrap-d94xt" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.794747 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-fernet-keys\") pod \"keystone-bootstrap-d94xt\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") " pod="openstack/keystone-bootstrap-d94xt" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.801895 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-combined-ca-bundle\") pod \"keystone-bootstrap-d94xt\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") " pod="openstack/keystone-bootstrap-d94xt" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.806898 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-scripts\") pod \"keystone-bootstrap-d94xt\" (UID: 
\"c7f5335d-50bb-4886-a562-e6ff443fb449\") " pod="openstack/keystone-bootstrap-d94xt" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.808328 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-config-data\") pod \"keystone-bootstrap-d94xt\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") " pod="openstack/keystone-bootstrap-d94xt" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.809898 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw6nc\" (UniqueName: \"kubernetes.io/projected/c7f5335d-50bb-4886-a562-e6ff443fb449-kube-api-access-jw6nc\") pod \"keystone-bootstrap-d94xt\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") " pod="openstack/keystone-bootstrap-d94xt" Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.996158 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d94xt" Jan 06 14:18:17 crc kubenswrapper[4869]: I0106 14:18:17.746502 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d75a768-8cb4-4876-9923-3fbf49a6f257" path="/var/lib/kubelet/pods/9d75a768-8cb4-4876-9923-3fbf49a6f257/volumes" Jan 06 14:18:21 crc kubenswrapper[4869]: I0106 14:18:21.450285 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" podUID="50dff5bf-77a4-43b5-aad5-b621313b4dca" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: i/o timeout" Jan 06 14:18:24 crc kubenswrapper[4869]: E0106 14:18:24.865475 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 06 14:18:24 crc kubenswrapper[4869]: E0106 14:18:24.866135 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
Jan 06 14:18:16 crc kubenswrapper[4869]: I0106 14:18:16.996158 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d94xt"
Jan 06 14:18:17 crc kubenswrapper[4869]: I0106 14:18:17.746502 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d75a768-8cb4-4876-9923-3fbf49a6f257" path="/var/lib/kubelet/pods/9d75a768-8cb4-4876-9923-3fbf49a6f257/volumes"
Jan 06 14:18:21 crc kubenswrapper[4869]: I0106 14:18:21.450285 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" podUID="50dff5bf-77a4-43b5-aad5-b621313b4dca" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: i/o timeout"
Jan 06 14:18:24 crc kubenswrapper[4869]: E0106 14:18:24.865475 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Jan 06 14:18:24 crc kubenswrapper[4869]: E0106 14:18:24.866135 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdxm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-5qp9n_openstack(5324a677-1d17-4031-ace1-8fc98bc58f9d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 06 14:18:24 crc kubenswrapper[4869]: E0106 14:18:24.867757 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-5qp9n" podUID="5324a677-1d17-4031-ace1-8fc98bc58f9d"
Jan 06 14:18:25 crc kubenswrapper[4869]: E0106 14:18:25.254874 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-5qp9n" podUID="5324a677-1d17-4031-ace1-8fc98bc58f9d"
Jan 06 14:18:25 crc kubenswrapper[4869]: E0106 14:18:25.905892 4869 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
Jan 06 14:18:25 crc kubenswrapper[4869]: E0106 14:18:25.906348 4869 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lj4bz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-fw77s_openstack(64424807-a383-4509-a25c-947f73a29e64): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 06 14:18:25 crc kubenswrapper[4869]: E0106 14:18:25.907641 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-fw77s" podUID="64424807-a383-4509-a25c-947f73a29e64"
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.217733 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq"
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.266016 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" event={"ID":"50dff5bf-77a4-43b5-aad5-b621313b4dca","Type":"ContainerDied","Data":"4bf764c7f05b957b97048158578bd4243578e3879557858364d86d069f6e84a5"}
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.266062 4869 scope.go:117] "RemoveContainer" containerID="7e3736c2d1c7509f63bc9396be06667964a0128d89a2da201618646e1e344e51"
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.266063 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq"
Jan 06 14:18:26 crc kubenswrapper[4869]: E0106 14:18:26.270296 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-fw77s" podUID="64424807-a383-4509-a25c-947f73a29e64"
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.307944 4869 scope.go:117] "RemoveContainer" containerID="dc717207afca3f511272fe5e4c16c0812468e7c48e1e00014c59aa3f02663b25"
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.397552 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-config\") pod \"50dff5bf-77a4-43b5-aad5-b621313b4dca\" (UID: \"50dff5bf-77a4-43b5-aad5-b621313b4dca\") "
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.397897 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-ovsdbserver-sb\") pod \"50dff5bf-77a4-43b5-aad5-b621313b4dca\" (UID: \"50dff5bf-77a4-43b5-aad5-b621313b4dca\") "
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.397961 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t45q2\" (UniqueName: \"kubernetes.io/projected/50dff5bf-77a4-43b5-aad5-b621313b4dca-kube-api-access-t45q2\") pod \"50dff5bf-77a4-43b5-aad5-b621313b4dca\" (UID: \"50dff5bf-77a4-43b5-aad5-b621313b4dca\") "
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.397988 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-dns-svc\") pod \"50dff5bf-77a4-43b5-aad5-b621313b4dca\" (UID: \"50dff5bf-77a4-43b5-aad5-b621313b4dca\") "
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.398055 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-ovsdbserver-nb\") pod \"50dff5bf-77a4-43b5-aad5-b621313b4dca\" (UID: \"50dff5bf-77a4-43b5-aad5-b621313b4dca\") "
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.404324 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50dff5bf-77a4-43b5-aad5-b621313b4dca-kube-api-access-t45q2" (OuterVolumeSpecName: "kube-api-access-t45q2") pod "50dff5bf-77a4-43b5-aad5-b621313b4dca" (UID: "50dff5bf-77a4-43b5-aad5-b621313b4dca"). InnerVolumeSpecName "kube-api-access-t45q2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.436752 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "50dff5bf-77a4-43b5-aad5-b621313b4dca" (UID: "50dff5bf-77a4-43b5-aad5-b621313b4dca"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.436823 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-config" (OuterVolumeSpecName: "config") pod "50dff5bf-77a4-43b5-aad5-b621313b4dca" (UID: "50dff5bf-77a4-43b5-aad5-b621313b4dca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.442937 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "50dff5bf-77a4-43b5-aad5-b621313b4dca" (UID: "50dff5bf-77a4-43b5-aad5-b621313b4dca"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.450195 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "50dff5bf-77a4-43b5-aad5-b621313b4dca" (UID: "50dff5bf-77a4-43b5-aad5-b621313b4dca"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.454885 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq" podUID="50dff5bf-77a4-43b5-aad5-b621313b4dca" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.123:5353: i/o timeout"
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.454983 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54f9b7b8d9-rmgvq"
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.499987 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-config\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.500028 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.500044 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t45q2\" (UniqueName: \"kubernetes.io/projected/50dff5bf-77a4-43b5-aad5-b621313b4dca-kube-api-access-t45q2\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.500057 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.500069 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50dff5bf-77a4-43b5-aad5-b621313b4dca-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.602728 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-rmgvq"]
Jan 06 14:18:26 crc kubenswrapper[4869]: I0106 14:18:26.627973 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-rmgvq"]
Jan 06 14:18:27 crc kubenswrapper[4869]: I0106 14:18:27.270525 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-d94xt"]
Jan 06 14:18:27 crc kubenswrapper[4869]: W0106 14:18:27.278841 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7f5335d_50bb_4886_a562_e6ff443fb449.slice/crio-33e7ec30cb0e2395cbeac9c69c4f233db741ae4aff1bfbccceca4e27c09ea433 WatchSource:0}: Error finding container 33e7ec30cb0e2395cbeac9c69c4f233db741ae4aff1bfbccceca4e27c09ea433: Status 404 returned error can't find the container with id 33e7ec30cb0e2395cbeac9c69c4f233db741ae4aff1bfbccceca4e27c09ea433
Jan 06 14:18:27 crc kubenswrapper[4869]: I0106 14:18:27.280470 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48df0513-e689-44db-8e53-3aa186ab3063","Type":"ContainerStarted","Data":"48f4823115caf7c48fbe4283a29199826490bfd152233c51f136cf548437054c"}
Jan 06 14:18:27 crc kubenswrapper[4869]: I0106 14:18:27.284988 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xbh9m" event={"ID":"4f635641-cd18-4d1b-a2a6-80a4b4b0697b","Type":"ContainerStarted","Data":"427e2d69771491464afb0a969ba9d62750b64a3eec599774401f8e0c508e1865"}
Jan 06 14:18:27 crc kubenswrapper[4869]: I0106 14:18:27.307147 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-xbh9m" podStartSLOduration=3.225420214 podStartE2EDuration="28.307122341s" podCreationTimestamp="2026-01-06 14:17:59 +0000 UTC" firstStartedPulling="2026-01-06 14:18:00.829819672 +0000 UTC m=+1099.369507336" lastFinishedPulling="2026-01-06 14:18:25.911521759 +0000 UTC m=+1124.451209463" observedRunningTime="2026-01-06 14:18:27.305424869 +0000 UTC m=+1125.845112553" watchObservedRunningTime="2026-01-06 14:18:27.307122341 +0000 UTC m=+1125.846810005"
Jan 06 14:18:27 crc kubenswrapper[4869]: I0106 14:18:27.717802 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50dff5bf-77a4-43b5-aad5-b621313b4dca" path="/var/lib/kubelet/pods/50dff5bf-77a4-43b5-aad5-b621313b4dca/volumes"
Jan 06 14:18:28 crc kubenswrapper[4869]: I0106 14:18:28.302224 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48df0513-e689-44db-8e53-3aa186ab3063","Type":"ContainerStarted","Data":"99bf881bf4015ee51066610468ba48d3ce7e3dbdd86e6ab2e187a48969887165"}
Jan 06 14:18:28 crc kubenswrapper[4869]: I0106 14:18:28.306249 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d94xt" event={"ID":"c7f5335d-50bb-4886-a562-e6ff443fb449","Type":"ContainerStarted","Data":"cb6a340dd7a9247b368f52244901abb9f10b008d7f909ca03ef408225b756f21"}
Jan 06 14:18:28 crc kubenswrapper[4869]: I0106 14:18:28.306277 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d94xt" event={"ID":"c7f5335d-50bb-4886-a562-e6ff443fb449","Type":"ContainerStarted","Data":"33e7ec30cb0e2395cbeac9c69c4f233db741ae4aff1bfbccceca4e27c09ea433"}
Jan 06 14:18:28 crc kubenswrapper[4869]: I0106 14:18:28.344406 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-d94xt" podStartSLOduration=12.344389673 podStartE2EDuration="12.344389673s" podCreationTimestamp="2026-01-06 14:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:18:28.331238826 +0000 UTC m=+1126.870926490" watchObservedRunningTime="2026-01-06 14:18:28.344389673 +0000 UTC m=+1126.884077337"
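The placement-db-sync-xbh9m latency entry above is the one pod in this stretch with real pull timestamps, and it shows how the two durations relate: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally excludes the image pull window (firstStartedPulling to lastFinishedPulling), matching up to small rounding in the logged monotonic offsets. A short Go check of that arithmetic, using the timestamps from the entry:

package main

import (
	"fmt"
	"time"
)

// layout matches the "2026-01-06 14:18:27.307122341 +0000 UTC" timestamps
// in the pod_startup_latency_tracker entries (fractional seconds optional).
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-06 14:17:59 +0000 UTC")
	firstPull := mustParse("2026-01-06 14:18:00.829819672 +0000 UTC")
	lastPull := mustParse("2026-01-06 14:18:25.911521759 +0000 UTC")
	running := mustParse("2026-01-06 14:18:27.307122341 +0000 UTC")

	e2e := running.Sub(created)           // total time from creation to observed running
	slo := e2e - lastPull.Sub(firstPull)  // same, minus the ~25s spent pulling the image
	fmt.Println("podStartE2EDuration:", e2e) // 28.307122341s, as logged
	fmt.Println("podStartSLOduration:", slo) // ~3.2254s, matching podStartSLOduration=3.225420214
}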
Jan 06 14:18:28 crc kubenswrapper[4869]: E0106 14:18:28.902988 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f635641_cd18_4d1b_a2a6_80a4b4b0697b.slice/crio-conmon-427e2d69771491464afb0a969ba9d62750b64a3eec599774401f8e0c508e1865.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f635641_cd18_4d1b_a2a6_80a4b4b0697b.slice/crio-427e2d69771491464afb0a969ba9d62750b64a3eec599774401f8e0c508e1865.scope\": RecentStats: unable to find data in memory cache]"
Jan 06 14:18:29 crc kubenswrapper[4869]: I0106 14:18:29.317257 4869 generic.go:334] "Generic (PLEG): container finished" podID="4f635641-cd18-4d1b-a2a6-80a4b4b0697b" containerID="427e2d69771491464afb0a969ba9d62750b64a3eec599774401f8e0c508e1865" exitCode=0
Jan 06 14:18:29 crc kubenswrapper[4869]: I0106 14:18:29.317342 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xbh9m" event={"ID":"4f635641-cd18-4d1b-a2a6-80a4b4b0697b","Type":"ContainerDied","Data":"427e2d69771491464afb0a969ba9d62750b64a3eec599774401f8e0c508e1865"}
Jan 06 14:18:30 crc kubenswrapper[4869]: I0106 14:18:30.325678 4869 generic.go:334] "Generic (PLEG): container finished" podID="1f6c4b71-32a5-473c-bdbb-d23acccaf5a3" containerID="b5670d758ec2da20f5fbeff760a97965b5c2917bfdcb1a4c4ed32d04db93fcc3" exitCode=0
Jan 06 14:18:30 crc kubenswrapper[4869]: I0106 14:18:30.325862 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nzc44" event={"ID":"1f6c4b71-32a5-473c-bdbb-d23acccaf5a3","Type":"ContainerDied","Data":"b5670d758ec2da20f5fbeff760a97965b5c2917bfdcb1a4c4ed32d04db93fcc3"}
Jan 06 14:18:31 crc kubenswrapper[4869]: I0106 14:18:31.335947 4869 generic.go:334] "Generic (PLEG): container finished" podID="c7f5335d-50bb-4886-a562-e6ff443fb449" containerID="cb6a340dd7a9247b368f52244901abb9f10b008d7f909ca03ef408225b756f21" exitCode=0
Jan 06 14:18:31 crc kubenswrapper[4869]: I0106 14:18:31.336043 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d94xt" event={"ID":"c7f5335d-50bb-4886-a562-e6ff443fb449","Type":"ContainerDied","Data":"cb6a340dd7a9247b368f52244901abb9f10b008d7f909ca03ef408225b756f21"}
Jan 06 14:18:33 crc kubenswrapper[4869]: I0106 14:18:33.622404 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 06 14:18:33 crc kubenswrapper[4869]: I0106 14:18:33.622766 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 06 14:18:33 crc kubenswrapper[4869]: I0106 14:18:33.622814 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kt9df"
Jan 06 14:18:33 crc kubenswrapper[4869]: I0106 14:18:33.623454 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"761debf1eef98bf25e5ef97d0bbf7309e01c1e5b01dc714bc8dcd3f2a34d299e"} pod="openshift-machine-config-operator/machine-config-daemon-kt9df" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 06 14:18:33 crc kubenswrapper[4869]: I0106 14:18:33.623509 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" containerID="cri-o://761debf1eef98bf25e5ef97d0bbf7309e01c1e5b01dc714bc8dcd3f2a34d299e" gracePeriod=600
Jan 06 14:18:36 crc kubenswrapper[4869]: I0106 14:18:36.989614 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-nzc44"
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.014069 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xbh9m"
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.017388 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-d94xt"
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.057742 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f6c4b71-32a5-473c-bdbb-d23acccaf5a3-config\") pod \"1f6c4b71-32a5-473c-bdbb-d23acccaf5a3\" (UID: \"1f6c4b71-32a5-473c-bdbb-d23acccaf5a3\") "
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.057795 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-scripts\") pod \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\" (UID: \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\") "
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.057828 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtpzm\" (UniqueName: \"kubernetes.io/projected/1f6c4b71-32a5-473c-bdbb-d23acccaf5a3-kube-api-access-gtpzm\") pod \"1f6c4b71-32a5-473c-bdbb-d23acccaf5a3\" (UID: \"1f6c4b71-32a5-473c-bdbb-d23acccaf5a3\") "
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.057862 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-fernet-keys\") pod \"c7f5335d-50bb-4886-a562-e6ff443fb449\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") "
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.057924 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-scripts\") pod \"c7f5335d-50bb-4886-a562-e6ff443fb449\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") "
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.057972 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-logs\") pod \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\" (UID: \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\") "
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.057996 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-config-data\") pod \"c7f5335d-50bb-4886-a562-e6ff443fb449\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") "
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.058084 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw6nc\" (UniqueName: \"kubernetes.io/projected/c7f5335d-50bb-4886-a562-e6ff443fb449-kube-api-access-jw6nc\") pod \"c7f5335d-50bb-4886-a562-e6ff443fb449\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") "
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.058111 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-combined-ca-bundle\") pod \"c7f5335d-50bb-4886-a562-e6ff443fb449\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") "
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.058133 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fc26\" (UniqueName: \"kubernetes.io/projected/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-kube-api-access-6fc26\") pod \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\" (UID: \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\") "
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.058184 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-credential-keys\") pod \"c7f5335d-50bb-4886-a562-e6ff443fb449\" (UID: \"c7f5335d-50bb-4886-a562-e6ff443fb449\") "
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.058214 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-combined-ca-bundle\") pod \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\" (UID: \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\") "
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.058241 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6c4b71-32a5-473c-bdbb-d23acccaf5a3-combined-ca-bundle\") pod \"1f6c4b71-32a5-473c-bdbb-d23acccaf5a3\" (UID: \"1f6c4b71-32a5-473c-bdbb-d23acccaf5a3\") "
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.058261 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-config-data\") pod \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\" (UID: \"4f635641-cd18-4d1b-a2a6-80a4b4b0697b\") "
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.065842 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c7f5335d-50bb-4886-a562-e6ff443fb449" (UID: "c7f5335d-50bb-4886-a562-e6ff443fb449"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.066057 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f6c4b71-32a5-473c-bdbb-d23acccaf5a3-kube-api-access-gtpzm" (OuterVolumeSpecName: "kube-api-access-gtpzm") pod "1f6c4b71-32a5-473c-bdbb-d23acccaf5a3" (UID: "1f6c4b71-32a5-473c-bdbb-d23acccaf5a3"). InnerVolumeSpecName "kube-api-access-gtpzm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.066508 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-logs" (OuterVolumeSpecName: "logs") pod "4f635641-cd18-4d1b-a2a6-80a4b4b0697b" (UID: "4f635641-cd18-4d1b-a2a6-80a4b4b0697b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.072742 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-scripts" (OuterVolumeSpecName: "scripts") pod "c7f5335d-50bb-4886-a562-e6ff443fb449" (UID: "c7f5335d-50bb-4886-a562-e6ff443fb449"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.078324 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-scripts" (OuterVolumeSpecName: "scripts") pod "4f635641-cd18-4d1b-a2a6-80a4b4b0697b" (UID: "4f635641-cd18-4d1b-a2a6-80a4b4b0697b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.078393 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-kube-api-access-6fc26" (OuterVolumeSpecName: "kube-api-access-6fc26") pod "4f635641-cd18-4d1b-a2a6-80a4b4b0697b" (UID: "4f635641-cd18-4d1b-a2a6-80a4b4b0697b"). InnerVolumeSpecName "kube-api-access-6fc26". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.078460 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c7f5335d-50bb-4886-a562-e6ff443fb449" (UID: "c7f5335d-50bb-4886-a562-e6ff443fb449"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.088332 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7f5335d-50bb-4886-a562-e6ff443fb449-kube-api-access-jw6nc" (OuterVolumeSpecName: "kube-api-access-jw6nc") pod "c7f5335d-50bb-4886-a562-e6ff443fb449" (UID: "c7f5335d-50bb-4886-a562-e6ff443fb449"). InnerVolumeSpecName "kube-api-access-jw6nc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.111605 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f635641-cd18-4d1b-a2a6-80a4b4b0697b" (UID: "4f635641-cd18-4d1b-a2a6-80a4b4b0697b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.124754 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7f5335d-50bb-4886-a562-e6ff443fb449" (UID: "c7f5335d-50bb-4886-a562-e6ff443fb449"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.126117 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-config-data" (OuterVolumeSpecName: "config-data") pod "c7f5335d-50bb-4886-a562-e6ff443fb449" (UID: "c7f5335d-50bb-4886-a562-e6ff443fb449"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.127307 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f6c4b71-32a5-473c-bdbb-d23acccaf5a3-config" (OuterVolumeSpecName: "config") pod "1f6c4b71-32a5-473c-bdbb-d23acccaf5a3" (UID: "1f6c4b71-32a5-473c-bdbb-d23acccaf5a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.127966 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-config-data" (OuterVolumeSpecName: "config-data") pod "4f635641-cd18-4d1b-a2a6-80a4b4b0697b" (UID: "4f635641-cd18-4d1b-a2a6-80a4b4b0697b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.129186 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f6c4b71-32a5-473c-bdbb-d23acccaf5a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f6c4b71-32a5-473c-bdbb-d23acccaf5a3" (UID: "1f6c4b71-32a5-473c-bdbb-d23acccaf5a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.159816 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f6c4b71-32a5-473c-bdbb-d23acccaf5a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.159845 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-config-data\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.159854 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f6c4b71-32a5-473c-bdbb-d23acccaf5a3-config\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.159863 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-scripts\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.159874 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtpzm\" (UniqueName: \"kubernetes.io/projected/1f6c4b71-32a5-473c-bdbb-d23acccaf5a3-kube-api-access-gtpzm\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.159885 4869 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.159894 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-scripts\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.159902 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-logs\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.159912 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-config-data\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.159921 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw6nc\" (UniqueName: \"kubernetes.io/projected/c7f5335d-50bb-4886-a562-e6ff443fb449-kube-api-access-jw6nc\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.159928 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.159939 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fc26\" (UniqueName: \"kubernetes.io/projected/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-kube-api-access-6fc26\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.159947 4869 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7f5335d-50bb-4886-a562-e6ff443fb449-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.159954 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f635641-cd18-4d1b-a2a6-80a4b4b0697b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.393371 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48df0513-e689-44db-8e53-3aa186ab3063","Type":"ContainerStarted","Data":"122cbb53cfeed4d7ac2f08c9895176247f11d52391a26adb0f97ca902beb0e7d"}
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.403967 4869 generic.go:334] "Generic (PLEG): container finished" podID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerID="761debf1eef98bf25e5ef97d0bbf7309e01c1e5b01dc714bc8dcd3f2a34d299e" exitCode=0
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.404056 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerDied","Data":"761debf1eef98bf25e5ef97d0bbf7309e01c1e5b01dc714bc8dcd3f2a34d299e"}
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.404090 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerStarted","Data":"a332af473bcbaead046814c5bfbced58c6de6afeca96a8b9d1a45f6d711dbe8f"}
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.404114 4869 scope.go:117] "RemoveContainer" containerID="00b21de14b885131a6ee84f5e807e1d7b8525758bcccc0f6c7a638d52ae501ed"
Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.412340 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nzc44"
event={"ID":"1f6c4b71-32a5-473c-bdbb-d23acccaf5a3","Type":"ContainerDied","Data":"e7340b0e208d09c982f73b28bdc50a3b3e79c90c9e174e390f1245f5b1e09dee"} Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.412385 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7340b0e208d09c982f73b28bdc50a3b3e79c90c9e174e390f1245f5b1e09dee" Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.412461 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-nzc44" Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.420258 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-xbh9m" event={"ID":"4f635641-cd18-4d1b-a2a6-80a4b4b0697b","Type":"ContainerDied","Data":"968355ea0014d28312fe7c5f9d414584c51e29a87b4bedf98a9eb69c1bd14722"} Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.420427 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="968355ea0014d28312fe7c5f9d414584c51e29a87b4bedf98a9eb69c1bd14722" Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.420525 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-xbh9m" Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.431382 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-d94xt" event={"ID":"c7f5335d-50bb-4886-a562-e6ff443fb449","Type":"ContainerDied","Data":"33e7ec30cb0e2395cbeac9c69c4f233db741ae4aff1bfbccceca4e27c09ea433"} Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.431429 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33e7ec30cb0e2395cbeac9c69c4f233db741ae4aff1bfbccceca4e27c09ea433" Jan 06 14:18:37 crc kubenswrapper[4869]: I0106 14:18:37.431483 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-d94xt" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.206169 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5df48645c5-c7ccn"] Jan 06 14:18:38 crc kubenswrapper[4869]: E0106 14:18:38.206639 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50dff5bf-77a4-43b5-aad5-b621313b4dca" containerName="dnsmasq-dns" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.206651 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="50dff5bf-77a4-43b5-aad5-b621313b4dca" containerName="dnsmasq-dns" Jan 06 14:18:38 crc kubenswrapper[4869]: E0106 14:18:38.206683 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6c4b71-32a5-473c-bdbb-d23acccaf5a3" containerName="neutron-db-sync" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.206689 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6c4b71-32a5-473c-bdbb-d23acccaf5a3" containerName="neutron-db-sync" Jan 06 14:18:38 crc kubenswrapper[4869]: E0106 14:18:38.206708 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f5335d-50bb-4886-a562-e6ff443fb449" containerName="keystone-bootstrap" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.206713 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f5335d-50bb-4886-a562-e6ff443fb449" containerName="keystone-bootstrap" Jan 06 14:18:38 crc kubenswrapper[4869]: E0106 14:18:38.206725 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50dff5bf-77a4-43b5-aad5-b621313b4dca" containerName="init" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.206730 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="50dff5bf-77a4-43b5-aad5-b621313b4dca" containerName="init" Jan 06 14:18:38 crc kubenswrapper[4869]: E0106 14:18:38.206742 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f635641-cd18-4d1b-a2a6-80a4b4b0697b" containerName="placement-db-sync" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.206748 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f635641-cd18-4d1b-a2a6-80a4b4b0697b" containerName="placement-db-sync" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.206883 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f635641-cd18-4d1b-a2a6-80a4b4b0697b" containerName="placement-db-sync" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.206895 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f6c4b71-32a5-473c-bdbb-d23acccaf5a3" containerName="neutron-db-sync" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.206907 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f5335d-50bb-4886-a562-e6ff443fb449" containerName="keystone-bootstrap" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.206918 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="50dff5bf-77a4-43b5-aad5-b621313b4dca" containerName="dnsmasq-dns" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.207395 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.228278 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.229456 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.229479 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6zj5p" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.229909 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.230101 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.255465 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.283523 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-749bc7d596-scpc9"] Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.286608 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.297777 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2671efdf-3270-4f9e-8a55-6a6f1f52497e-public-tls-certs\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.298189 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2671efdf-3270-4f9e-8a55-6a6f1f52497e-credential-keys\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.298265 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cslbl\" (UniqueName: \"kubernetes.io/projected/2671efdf-3270-4f9e-8a55-6a6f1f52497e-kube-api-access-cslbl\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.298438 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2671efdf-3270-4f9e-8a55-6a6f1f52497e-scripts\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.298516 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2671efdf-3270-4f9e-8a55-6a6f1f52497e-combined-ca-bundle\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.298562 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2671efdf-3270-4f9e-8a55-6a6f1f52497e-config-data\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.298594 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2671efdf-3270-4f9e-8a55-6a6f1f52497e-fernet-keys\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.298653 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2671efdf-3270-4f9e-8a55-6a6f1f52497e-internal-tls-certs\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.304777 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.305072 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.312421 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.312646 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-7hbc4" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.313052 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.338506 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5df48645c5-c7ccn"] Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.356974 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-749bc7d596-scpc9"] Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.396092 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-f4ct6"] Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.397497 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.399690 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9930dd36-d171-453b-ad1a-7344e6ddb59a-internal-tls-certs\") pod \"placement-749bc7d596-scpc9\" (UID: \"9930dd36-d171-453b-ad1a-7344e6ddb59a\") " pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.399772 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2671efdf-3270-4f9e-8a55-6a6f1f52497e-combined-ca-bundle\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.399827 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2671efdf-3270-4f9e-8a55-6a6f1f52497e-config-data\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.399853 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2671efdf-3270-4f9e-8a55-6a6f1f52497e-fernet-keys\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.399882 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfq8p\" (UniqueName: \"kubernetes.io/projected/9930dd36-d171-453b-ad1a-7344e6ddb59a-kube-api-access-wfq8p\") pod \"placement-749bc7d596-scpc9\" (UID: \"9930dd36-d171-453b-ad1a-7344e6ddb59a\") " pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.399909 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9930dd36-d171-453b-ad1a-7344e6ddb59a-config-data\") pod \"placement-749bc7d596-scpc9\" (UID: \"9930dd36-d171-453b-ad1a-7344e6ddb59a\") " pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.399942 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2671efdf-3270-4f9e-8a55-6a6f1f52497e-internal-tls-certs\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.399973 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2671efdf-3270-4f9e-8a55-6a6f1f52497e-public-tls-certs\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.399996 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2671efdf-3270-4f9e-8a55-6a6f1f52497e-credential-keys\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " 
pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.400029 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cslbl\" (UniqueName: \"kubernetes.io/projected/2671efdf-3270-4f9e-8a55-6a6f1f52497e-kube-api-access-cslbl\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.400103 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9930dd36-d171-453b-ad1a-7344e6ddb59a-logs\") pod \"placement-749bc7d596-scpc9\" (UID: \"9930dd36-d171-453b-ad1a-7344e6ddb59a\") " pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.400130 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9930dd36-d171-453b-ad1a-7344e6ddb59a-public-tls-certs\") pod \"placement-749bc7d596-scpc9\" (UID: \"9930dd36-d171-453b-ad1a-7344e6ddb59a\") " pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.400153 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9930dd36-d171-453b-ad1a-7344e6ddb59a-combined-ca-bundle\") pod \"placement-749bc7d596-scpc9\" (UID: \"9930dd36-d171-453b-ad1a-7344e6ddb59a\") " pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.400182 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9930dd36-d171-453b-ad1a-7344e6ddb59a-scripts\") pod \"placement-749bc7d596-scpc9\" (UID: \"9930dd36-d171-453b-ad1a-7344e6ddb59a\") " pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.400233 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2671efdf-3270-4f9e-8a55-6a6f1f52497e-scripts\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.407002 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2671efdf-3270-4f9e-8a55-6a6f1f52497e-scripts\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.407656 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2671efdf-3270-4f9e-8a55-6a6f1f52497e-fernet-keys\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.407769 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2671efdf-3270-4f9e-8a55-6a6f1f52497e-combined-ca-bundle\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 
14:18:38.408420 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2671efdf-3270-4f9e-8a55-6a6f1f52497e-config-data\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.410076 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2671efdf-3270-4f9e-8a55-6a6f1f52497e-internal-tls-certs\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.410099 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-f4ct6"] Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.414552 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2671efdf-3270-4f9e-8a55-6a6f1f52497e-credential-keys\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.421469 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2671efdf-3270-4f9e-8a55-6a6f1f52497e-public-tls-certs\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.424794 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cslbl\" (UniqueName: \"kubernetes.io/projected/2671efdf-3270-4f9e-8a55-6a6f1f52497e-kube-api-access-cslbl\") pod \"keystone-5df48645c5-c7ccn\" (UID: \"2671efdf-3270-4f9e-8a55-6a6f1f52497e\") " pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.443962 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5qp9n" event={"ID":"5324a677-1d17-4031-ace1-8fc98bc58f9d","Type":"ContainerStarted","Data":"a92f016203396ce371d741145531991db784723689b0373db755971bd19606e6"} Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.448766 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dff58f544-954n8"] Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.450645 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dff58f544-954n8" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.453448 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.458518 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.458692 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vm6n8" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.459040 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.474544 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dff58f544-954n8"] Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.496710 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-5qp9n" podStartSLOduration=3.345678361 podStartE2EDuration="39.49668934s" podCreationTimestamp="2026-01-06 14:17:59 +0000 UTC" firstStartedPulling="2026-01-06 14:18:01.071017178 +0000 UTC m=+1099.610704842" lastFinishedPulling="2026-01-06 14:18:37.222028157 +0000 UTC m=+1135.761715821" observedRunningTime="2026-01-06 14:18:38.469806652 +0000 UTC m=+1137.009494316" watchObservedRunningTime="2026-01-06 14:18:38.49668934 +0000 UTC m=+1137.036377004" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.501368 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfq8p\" (UniqueName: \"kubernetes.io/projected/9930dd36-d171-453b-ad1a-7344e6ddb59a-kube-api-access-wfq8p\") pod \"placement-749bc7d596-scpc9\" (UID: \"9930dd36-d171-453b-ad1a-7344e6ddb59a\") " pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.501408 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4rl9\" (UniqueName: \"kubernetes.io/projected/be800df2-784f-45eb-b280-81679e58eb7a-kube-api-access-p4rl9\") pod \"dnsmasq-dns-7b946d459c-f4ct6\" (UID: \"be800df2-784f-45eb-b280-81679e58eb7a\") " pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.501428 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9930dd36-d171-453b-ad1a-7344e6ddb59a-config-data\") pod \"placement-749bc7d596-scpc9\" (UID: \"9930dd36-d171-453b-ad1a-7344e6ddb59a\") " pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.501463 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-config\") pod \"dnsmasq-dns-7b946d459c-f4ct6\" (UID: \"be800df2-784f-45eb-b280-81679e58eb7a\") " pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.501491 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtt7d\" (UniqueName: \"kubernetes.io/projected/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-kube-api-access-rtt7d\") pod \"neutron-dff58f544-954n8\" (UID: \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\") " pod="openstack/neutron-dff58f544-954n8" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 
14:18:38.501535 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-dns-svc\") pod \"dnsmasq-dns-7b946d459c-f4ct6\" (UID: \"be800df2-784f-45eb-b280-81679e58eb7a\") " pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.501556 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-f4ct6\" (UID: \"be800df2-784f-45eb-b280-81679e58eb7a\") " pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.501594 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-config\") pod \"neutron-dff58f544-954n8\" (UID: \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\") " pod="openstack/neutron-dff58f544-954n8" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.501624 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9930dd36-d171-453b-ad1a-7344e6ddb59a-logs\") pod \"placement-749bc7d596-scpc9\" (UID: \"9930dd36-d171-453b-ad1a-7344e6ddb59a\") " pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.501642 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9930dd36-d171-453b-ad1a-7344e6ddb59a-public-tls-certs\") pod \"placement-749bc7d596-scpc9\" (UID: \"9930dd36-d171-453b-ad1a-7344e6ddb59a\") " pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.501659 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9930dd36-d171-453b-ad1a-7344e6ddb59a-combined-ca-bundle\") pod \"placement-749bc7d596-scpc9\" (UID: \"9930dd36-d171-453b-ad1a-7344e6ddb59a\") " pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.501693 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9930dd36-d171-453b-ad1a-7344e6ddb59a-scripts\") pod \"placement-749bc7d596-scpc9\" (UID: \"9930dd36-d171-453b-ad1a-7344e6ddb59a\") " pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.501710 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-ovndb-tls-certs\") pod \"neutron-dff58f544-954n8\" (UID: \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\") " pod="openstack/neutron-dff58f544-954n8" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.501743 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-combined-ca-bundle\") pod \"neutron-dff58f544-954n8\" (UID: \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\") " pod="openstack/neutron-dff58f544-954n8" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.501764 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9930dd36-d171-453b-ad1a-7344e6ddb59a-internal-tls-certs\") pod \"placement-749bc7d596-scpc9\" (UID: \"9930dd36-d171-453b-ad1a-7344e6ddb59a\") " pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.501797 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-f4ct6\" (UID: \"be800df2-784f-45eb-b280-81679e58eb7a\") " pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.501815 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-httpd-config\") pod \"neutron-dff58f544-954n8\" (UID: \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\") " pod="openstack/neutron-dff58f544-954n8" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.508368 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9930dd36-d171-453b-ad1a-7344e6ddb59a-internal-tls-certs\") pod \"placement-749bc7d596-scpc9\" (UID: \"9930dd36-d171-453b-ad1a-7344e6ddb59a\") " pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.513500 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9930dd36-d171-453b-ad1a-7344e6ddb59a-logs\") pod \"placement-749bc7d596-scpc9\" (UID: \"9930dd36-d171-453b-ad1a-7344e6ddb59a\") " pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.514056 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9930dd36-d171-453b-ad1a-7344e6ddb59a-config-data\") pod \"placement-749bc7d596-scpc9\" (UID: \"9930dd36-d171-453b-ad1a-7344e6ddb59a\") " pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.518343 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9930dd36-d171-453b-ad1a-7344e6ddb59a-public-tls-certs\") pod \"placement-749bc7d596-scpc9\" (UID: \"9930dd36-d171-453b-ad1a-7344e6ddb59a\") " pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.520945 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9930dd36-d171-453b-ad1a-7344e6ddb59a-scripts\") pod \"placement-749bc7d596-scpc9\" (UID: \"9930dd36-d171-453b-ad1a-7344e6ddb59a\") " pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.525528 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfq8p\" (UniqueName: \"kubernetes.io/projected/9930dd36-d171-453b-ad1a-7344e6ddb59a-kube-api-access-wfq8p\") pod \"placement-749bc7d596-scpc9\" (UID: \"9930dd36-d171-453b-ad1a-7344e6ddb59a\") " pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.526155 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9930dd36-d171-453b-ad1a-7344e6ddb59a-combined-ca-bundle\") pod \"placement-749bc7d596-scpc9\" (UID: \"9930dd36-d171-453b-ad1a-7344e6ddb59a\") " pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.531608 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.603495 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-f4ct6\" (UID: \"be800df2-784f-45eb-b280-81679e58eb7a\") " pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.603546 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-config\") pod \"neutron-dff58f544-954n8\" (UID: \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\") " pod="openstack/neutron-dff58f544-954n8" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.603580 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-ovndb-tls-certs\") pod \"neutron-dff58f544-954n8\" (UID: \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\") " pod="openstack/neutron-dff58f544-954n8" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.603616 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-combined-ca-bundle\") pod \"neutron-dff58f544-954n8\" (UID: \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\") " pod="openstack/neutron-dff58f544-954n8" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.603651 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-f4ct6\" (UID: \"be800df2-784f-45eb-b280-81679e58eb7a\") " pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.603683 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-httpd-config\") pod \"neutron-dff58f544-954n8\" (UID: \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\") " pod="openstack/neutron-dff58f544-954n8" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.603703 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4rl9\" (UniqueName: \"kubernetes.io/projected/be800df2-784f-45eb-b280-81679e58eb7a-kube-api-access-p4rl9\") pod \"dnsmasq-dns-7b946d459c-f4ct6\" (UID: \"be800df2-784f-45eb-b280-81679e58eb7a\") " pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.603731 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-config\") pod \"dnsmasq-dns-7b946d459c-f4ct6\" (UID: \"be800df2-784f-45eb-b280-81679e58eb7a\") " pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.603758 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rtt7d\" (UniqueName: \"kubernetes.io/projected/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-kube-api-access-rtt7d\") pod \"neutron-dff58f544-954n8\" (UID: \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\") " pod="openstack/neutron-dff58f544-954n8" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.603782 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-dns-svc\") pod \"dnsmasq-dns-7b946d459c-f4ct6\" (UID: \"be800df2-784f-45eb-b280-81679e58eb7a\") " pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.604881 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-f4ct6\" (UID: \"be800df2-784f-45eb-b280-81679e58eb7a\") " pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.605342 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-f4ct6\" (UID: \"be800df2-784f-45eb-b280-81679e58eb7a\") " pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.605609 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-config\") pod \"dnsmasq-dns-7b946d459c-f4ct6\" (UID: \"be800df2-784f-45eb-b280-81679e58eb7a\") " pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.606722 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-dns-svc\") pod \"dnsmasq-dns-7b946d459c-f4ct6\" (UID: \"be800df2-784f-45eb-b280-81679e58eb7a\") " pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.608616 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-ovndb-tls-certs\") pod \"neutron-dff58f544-954n8\" (UID: \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\") " pod="openstack/neutron-dff58f544-954n8" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.608776 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-config\") pod \"neutron-dff58f544-954n8\" (UID: \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\") " pod="openstack/neutron-dff58f544-954n8" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.608779 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-combined-ca-bundle\") pod \"neutron-dff58f544-954n8\" (UID: \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\") " pod="openstack/neutron-dff58f544-954n8" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.609657 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-httpd-config\") pod \"neutron-dff58f544-954n8\" (UID: 
\"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\") " pod="openstack/neutron-dff58f544-954n8" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.623783 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4rl9\" (UniqueName: \"kubernetes.io/projected/be800df2-784f-45eb-b280-81679e58eb7a-kube-api-access-p4rl9\") pod \"dnsmasq-dns-7b946d459c-f4ct6\" (UID: \"be800df2-784f-45eb-b280-81679e58eb7a\") " pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.625577 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtt7d\" (UniqueName: \"kubernetes.io/projected/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-kube-api-access-rtt7d\") pod \"neutron-dff58f544-954n8\" (UID: \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\") " pod="openstack/neutron-dff58f544-954n8" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.662370 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.813344 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.841201 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5df48645c5-c7ccn"] Jan 06 14:18:38 crc kubenswrapper[4869]: I0106 14:18:38.904151 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dff58f544-954n8" Jan 06 14:18:38 crc kubenswrapper[4869]: W0106 14:18:38.949500 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2671efdf_3270_4f9e_8a55_6a6f1f52497e.slice/crio-f196a42ca3ed1e060aacbb3aa93787ba4fb8007b33412cb69bbdc1d7e0236eae WatchSource:0}: Error finding container f196a42ca3ed1e060aacbb3aa93787ba4fb8007b33412cb69bbdc1d7e0236eae: Status 404 returned error can't find the container with id f196a42ca3ed1e060aacbb3aa93787ba4fb8007b33412cb69bbdc1d7e0236eae Jan 06 14:18:39 crc kubenswrapper[4869]: I0106 14:18:39.478103 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5df48645c5-c7ccn" event={"ID":"2671efdf-3270-4f9e-8a55-6a6f1f52497e","Type":"ContainerStarted","Data":"2e7cee5aebf685c175c2ab13be5d407669c08053af4139f631f1622a35608050"} Jan 06 14:18:39 crc kubenswrapper[4869]: I0106 14:18:39.478495 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5df48645c5-c7ccn" event={"ID":"2671efdf-3270-4f9e-8a55-6a6f1f52497e","Type":"ContainerStarted","Data":"f196a42ca3ed1e060aacbb3aa93787ba4fb8007b33412cb69bbdc1d7e0236eae"} Jan 06 14:18:39 crc kubenswrapper[4869]: I0106 14:18:39.478553 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:18:39 crc kubenswrapper[4869]: I0106 14:18:39.501336 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5df48645c5-c7ccn" podStartSLOduration=1.501315892 podStartE2EDuration="1.501315892s" podCreationTimestamp="2026-01-06 14:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:18:39.496739859 +0000 UTC m=+1138.036427523" watchObservedRunningTime="2026-01-06 14:18:39.501315892 +0000 UTC m=+1138.041003556" Jan 06 14:18:39 crc kubenswrapper[4869]: I0106 14:18:39.514145 4869 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-f4ct6"] Jan 06 14:18:39 crc kubenswrapper[4869]: W0106 14:18:39.521211 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe800df2_784f_45eb_b280_81679e58eb7a.slice/crio-b743e824761432e2e9a6432c4d8f8557378203d90da27099cd0b5a9a38a92f0a WatchSource:0}: Error finding container b743e824761432e2e9a6432c4d8f8557378203d90da27099cd0b5a9a38a92f0a: Status 404 returned error can't find the container with id b743e824761432e2e9a6432c4d8f8557378203d90da27099cd0b5a9a38a92f0a Jan 06 14:18:39 crc kubenswrapper[4869]: I0106 14:18:39.617596 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-749bc7d596-scpc9"] Jan 06 14:18:39 crc kubenswrapper[4869]: I0106 14:18:39.795153 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dff58f544-954n8"] Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.491401 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fw77s" event={"ID":"64424807-a383-4509-a25c-947f73a29e64","Type":"ContainerStarted","Data":"3ecc254d269fae924cdc63861cb92522b9e11267e9dea177ef861735a6ab6e53"} Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.509206 4869 generic.go:334] "Generic (PLEG): container finished" podID="be800df2-784f-45eb-b280-81679e58eb7a" containerID="2f0fdecc92ed490e106f96f1120436f80636b21ba425d5baab810ad0fa60e0aa" exitCode=0 Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.509293 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" event={"ID":"be800df2-784f-45eb-b280-81679e58eb7a","Type":"ContainerDied","Data":"2f0fdecc92ed490e106f96f1120436f80636b21ba425d5baab810ad0fa60e0aa"} Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.509325 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" event={"ID":"be800df2-784f-45eb-b280-81679e58eb7a","Type":"ContainerStarted","Data":"b743e824761432e2e9a6432c4d8f8557378203d90da27099cd0b5a9a38a92f0a"} Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.514343 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dff58f544-954n8" event={"ID":"148c4ddd-2b85-4b45-bebc-fd77a7cb689e","Type":"ContainerStarted","Data":"a81cc452c0a26e1ef54b6d1727b47412cfd75dddbd47425dfa68729aa725da1e"} Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.514399 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dff58f544-954n8" event={"ID":"148c4ddd-2b85-4b45-bebc-fd77a7cb689e","Type":"ContainerStarted","Data":"21bb003af501828f5e1674e0653111d2da527c894cb8d4c6a4b665eeb8aacfbd"} Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.526716 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-fw77s" podStartSLOduration=2.654166092 podStartE2EDuration="41.526688589s" podCreationTimestamp="2026-01-06 14:17:59 +0000 UTC" firstStartedPulling="2026-01-06 14:18:01.223809966 +0000 UTC m=+1099.763497630" lastFinishedPulling="2026-01-06 14:18:40.096332463 +0000 UTC m=+1138.636020127" observedRunningTime="2026-01-06 14:18:40.518734972 +0000 UTC m=+1139.058422646" watchObservedRunningTime="2026-01-06 14:18:40.526688589 +0000 UTC m=+1139.066376253" Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.546555 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-749bc7d596-scpc9" 
event={"ID":"9930dd36-d171-453b-ad1a-7344e6ddb59a","Type":"ContainerStarted","Data":"f154944f0528e548ab29803195d3a7f3d2cea2e92f6c4b836a22f03e0e48310a"} Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.546836 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-749bc7d596-scpc9" event={"ID":"9930dd36-d171-453b-ad1a-7344e6ddb59a","Type":"ContainerStarted","Data":"7889000e53819d239a7f915fd59eea954d684db01465c219ea912d46a5066c6a"} Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.546860 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-749bc7d596-scpc9" event={"ID":"9930dd36-d171-453b-ad1a-7344e6ddb59a","Type":"ContainerStarted","Data":"6efb2228d87a348b482a5fd8ab3d42694277e18293b3daf5935bd3168b4b84c7"} Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.549138 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.577123 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-749bc7d596-scpc9" podStartSLOduration=2.57710303 podStartE2EDuration="2.57710303s" podCreationTimestamp="2026-01-06 14:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:18:40.575274755 +0000 UTC m=+1139.114962419" watchObservedRunningTime="2026-01-06 14:18:40.57710303 +0000 UTC m=+1139.116790694" Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.728280 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-77f9b5db4f-c4t9m"] Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.736913 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.741158 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.741421 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.761009 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77f9b5db4f-c4t9m"] Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.894938 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27991635-1274-47d8-b264-0ff73afb91aa-internal-tls-certs\") pod \"neutron-77f9b5db4f-c4t9m\" (UID: \"27991635-1274-47d8-b264-0ff73afb91aa\") " pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.895134 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/27991635-1274-47d8-b264-0ff73afb91aa-ovndb-tls-certs\") pod \"neutron-77f9b5db4f-c4t9m\" (UID: \"27991635-1274-47d8-b264-0ff73afb91aa\") " pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.895302 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k8ms\" (UniqueName: \"kubernetes.io/projected/27991635-1274-47d8-b264-0ff73afb91aa-kube-api-access-6k8ms\") pod \"neutron-77f9b5db4f-c4t9m\" (UID: \"27991635-1274-47d8-b264-0ff73afb91aa\") " pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.895362 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/27991635-1274-47d8-b264-0ff73afb91aa-httpd-config\") pod \"neutron-77f9b5db4f-c4t9m\" (UID: \"27991635-1274-47d8-b264-0ff73afb91aa\") " pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.895466 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/27991635-1274-47d8-b264-0ff73afb91aa-config\") pod \"neutron-77f9b5db4f-c4t9m\" (UID: \"27991635-1274-47d8-b264-0ff73afb91aa\") " pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.895508 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27991635-1274-47d8-b264-0ff73afb91aa-combined-ca-bundle\") pod \"neutron-77f9b5db4f-c4t9m\" (UID: \"27991635-1274-47d8-b264-0ff73afb91aa\") " pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:40 crc kubenswrapper[4869]: I0106 14:18:40.895647 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/27991635-1274-47d8-b264-0ff73afb91aa-public-tls-certs\") pod \"neutron-77f9b5db4f-c4t9m\" (UID: \"27991635-1274-47d8-b264-0ff73afb91aa\") " pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:41 crc kubenswrapper[4869]: I0106 14:18:41.001323 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/27991635-1274-47d8-b264-0ff73afb91aa-internal-tls-certs\") pod \"neutron-77f9b5db4f-c4t9m\" (UID: \"27991635-1274-47d8-b264-0ff73afb91aa\") " pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:41 crc kubenswrapper[4869]: I0106 14:18:41.001896 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/27991635-1274-47d8-b264-0ff73afb91aa-ovndb-tls-certs\") pod \"neutron-77f9b5db4f-c4t9m\" (UID: \"27991635-1274-47d8-b264-0ff73afb91aa\") " pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:41 crc kubenswrapper[4869]: I0106 14:18:41.001970 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k8ms\" (UniqueName: \"kubernetes.io/projected/27991635-1274-47d8-b264-0ff73afb91aa-kube-api-access-6k8ms\") pod \"neutron-77f9b5db4f-c4t9m\" (UID: \"27991635-1274-47d8-b264-0ff73afb91aa\") " pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:41 crc kubenswrapper[4869]: I0106 14:18:41.002019 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/27991635-1274-47d8-b264-0ff73afb91aa-httpd-config\") pod \"neutron-77f9b5db4f-c4t9m\" (UID: \"27991635-1274-47d8-b264-0ff73afb91aa\") " pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:41 crc kubenswrapper[4869]: I0106 14:18:41.002079 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/27991635-1274-47d8-b264-0ff73afb91aa-config\") pod \"neutron-77f9b5db4f-c4t9m\" (UID: \"27991635-1274-47d8-b264-0ff73afb91aa\") " pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:41 crc kubenswrapper[4869]: I0106 14:18:41.002110 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27991635-1274-47d8-b264-0ff73afb91aa-combined-ca-bundle\") pod \"neutron-77f9b5db4f-c4t9m\" (UID: \"27991635-1274-47d8-b264-0ff73afb91aa\") " pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:41 crc kubenswrapper[4869]: I0106 14:18:41.002185 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/27991635-1274-47d8-b264-0ff73afb91aa-public-tls-certs\") pod \"neutron-77f9b5db4f-c4t9m\" (UID: \"27991635-1274-47d8-b264-0ff73afb91aa\") " pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:41 crc kubenswrapper[4869]: I0106 14:18:41.011053 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27991635-1274-47d8-b264-0ff73afb91aa-internal-tls-certs\") pod \"neutron-77f9b5db4f-c4t9m\" (UID: \"27991635-1274-47d8-b264-0ff73afb91aa\") " pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:41 crc kubenswrapper[4869]: I0106 14:18:41.013038 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/27991635-1274-47d8-b264-0ff73afb91aa-config\") pod \"neutron-77f9b5db4f-c4t9m\" (UID: \"27991635-1274-47d8-b264-0ff73afb91aa\") " pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:41 crc kubenswrapper[4869]: I0106 14:18:41.013855 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27991635-1274-47d8-b264-0ff73afb91aa-combined-ca-bundle\") pod \"neutron-77f9b5db4f-c4t9m\" (UID: \"27991635-1274-47d8-b264-0ff73afb91aa\") " 
pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:41 crc kubenswrapper[4869]: I0106 14:18:41.013947 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/27991635-1274-47d8-b264-0ff73afb91aa-ovndb-tls-certs\") pod \"neutron-77f9b5db4f-c4t9m\" (UID: \"27991635-1274-47d8-b264-0ff73afb91aa\") " pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:41 crc kubenswrapper[4869]: I0106 14:18:41.019572 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/27991635-1274-47d8-b264-0ff73afb91aa-httpd-config\") pod \"neutron-77f9b5db4f-c4t9m\" (UID: \"27991635-1274-47d8-b264-0ff73afb91aa\") " pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:41 crc kubenswrapper[4869]: I0106 14:18:41.020202 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/27991635-1274-47d8-b264-0ff73afb91aa-public-tls-certs\") pod \"neutron-77f9b5db4f-c4t9m\" (UID: \"27991635-1274-47d8-b264-0ff73afb91aa\") " pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:41 crc kubenswrapper[4869]: I0106 14:18:41.025484 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k8ms\" (UniqueName: \"kubernetes.io/projected/27991635-1274-47d8-b264-0ff73afb91aa-kube-api-access-6k8ms\") pod \"neutron-77f9b5db4f-c4t9m\" (UID: \"27991635-1274-47d8-b264-0ff73afb91aa\") " pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:41 crc kubenswrapper[4869]: I0106 14:18:41.106944 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:41 crc kubenswrapper[4869]: I0106 14:18:41.557655 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:18:42 crc kubenswrapper[4869]: I0106 14:18:42.548527 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-77f9b5db4f-c4t9m"] Jan 06 14:18:42 crc kubenswrapper[4869]: I0106 14:18:42.569230 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77f9b5db4f-c4t9m" event={"ID":"27991635-1274-47d8-b264-0ff73afb91aa","Type":"ContainerStarted","Data":"5221c03b6730f20db7b5d8f81c08ca908e28756e7e2491fae87305640d42931e"} Jan 06 14:18:44 crc kubenswrapper[4869]: I0106 14:18:44.594641 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" event={"ID":"be800df2-784f-45eb-b280-81679e58eb7a","Type":"ContainerStarted","Data":"27d83b31ce532544649af4ae7eca190afd86cf38960ba185596e9dec94f512dc"} Jan 06 14:18:44 crc kubenswrapper[4869]: I0106 14:18:44.595265 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" Jan 06 14:18:44 crc kubenswrapper[4869]: I0106 14:18:44.597591 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dff58f544-954n8" event={"ID":"148c4ddd-2b85-4b45-bebc-fd77a7cb689e","Type":"ContainerStarted","Data":"afa0f4fd02bb5c02c1291ce6c94431d9c3ee85489a52c7a5ad4042527db1b6e6"} Jan 06 14:18:44 crc kubenswrapper[4869]: I0106 14:18:44.598428 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-dff58f544-954n8" Jan 06 14:18:44 crc kubenswrapper[4869]: I0106 14:18:44.600849 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77f9b5db4f-c4t9m" 
event={"ID":"27991635-1274-47d8-b264-0ff73afb91aa","Type":"ContainerStarted","Data":"c03a2b76c5ba101a1ba2da7d58c0a087757f17a641ff422ca7a8a3a7d060eb1e"} Jan 06 14:18:44 crc kubenswrapper[4869]: I0106 14:18:44.600870 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-77f9b5db4f-c4t9m" event={"ID":"27991635-1274-47d8-b264-0ff73afb91aa","Type":"ContainerStarted","Data":"12fc5a173ef153db9ee57b7ccd4771ce5757e211100724d4c604157f924db684"} Jan 06 14:18:44 crc kubenswrapper[4869]: I0106 14:18:44.601299 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:18:44 crc kubenswrapper[4869]: I0106 14:18:44.627446 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" podStartSLOduration=6.62742774 podStartE2EDuration="6.62742774s" podCreationTimestamp="2026-01-06 14:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:18:44.610892349 +0000 UTC m=+1143.150580013" watchObservedRunningTime="2026-01-06 14:18:44.62742774 +0000 UTC m=+1143.167115394" Jan 06 14:18:44 crc kubenswrapper[4869]: I0106 14:18:44.648095 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-77f9b5db4f-c4t9m" podStartSLOduration=4.648073332 podStartE2EDuration="4.648073332s" podCreationTimestamp="2026-01-06 14:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:18:44.631772257 +0000 UTC m=+1143.171459921" watchObservedRunningTime="2026-01-06 14:18:44.648073332 +0000 UTC m=+1143.187760996" Jan 06 14:18:44 crc kubenswrapper[4869]: I0106 14:18:44.664443 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dff58f544-954n8" podStartSLOduration=6.664425398 podStartE2EDuration="6.664425398s" podCreationTimestamp="2026-01-06 14:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:18:44.659066625 +0000 UTC m=+1143.198754309" watchObservedRunningTime="2026-01-06 14:18:44.664425398 +0000 UTC m=+1143.204113062" Jan 06 14:18:51 crc kubenswrapper[4869]: I0106 14:18:51.685158 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48df0513-e689-44db-8e53-3aa186ab3063","Type":"ContainerStarted","Data":"544c9f73ca4c4e7c144b21e9cd652c59ff51963ceaf18901dddb766894e5eaf1"} Jan 06 14:18:51 crc kubenswrapper[4869]: I0106 14:18:51.685997 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 06 14:18:51 crc kubenswrapper[4869]: I0106 14:18:51.685646 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="48df0513-e689-44db-8e53-3aa186ab3063" containerName="ceilometer-notification-agent" containerID="cri-o://99bf881bf4015ee51066610468ba48d3ce7e3dbdd86e6ab2e187a48969887165" gracePeriod=30 Jan 06 14:18:51 crc kubenswrapper[4869]: I0106 14:18:51.685332 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="48df0513-e689-44db-8e53-3aa186ab3063" containerName="ceilometer-central-agent" containerID="cri-o://48f4823115caf7c48fbe4283a29199826490bfd152233c51f136cf548437054c" gracePeriod=30 Jan 06 14:18:51 crc kubenswrapper[4869]: 
I0106 14:18:51.685727 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="48df0513-e689-44db-8e53-3aa186ab3063" containerName="proxy-httpd" containerID="cri-o://544c9f73ca4c4e7c144b21e9cd652c59ff51963ceaf18901dddb766894e5eaf1" gracePeriod=30 Jan 06 14:18:51 crc kubenswrapper[4869]: I0106 14:18:51.685576 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="48df0513-e689-44db-8e53-3aa186ab3063" containerName="sg-core" containerID="cri-o://122cbb53cfeed4d7ac2f08c9895176247f11d52391a26adb0f97ca902beb0e7d" gracePeriod=30 Jan 06 14:18:51 crc kubenswrapper[4869]: I0106 14:18:51.690256 4869 generic.go:334] "Generic (PLEG): container finished" podID="64424807-a383-4509-a25c-947f73a29e64" containerID="3ecc254d269fae924cdc63861cb92522b9e11267e9dea177ef861735a6ab6e53" exitCode=0 Jan 06 14:18:51 crc kubenswrapper[4869]: I0106 14:18:51.690429 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fw77s" event={"ID":"64424807-a383-4509-a25c-947f73a29e64","Type":"ContainerDied","Data":"3ecc254d269fae924cdc63861cb92522b9e11267e9dea177ef861735a6ab6e53"} Jan 06 14:18:51 crc kubenswrapper[4869]: I0106 14:18:51.729709 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.749373559 podStartE2EDuration="52.729683182s" podCreationTimestamp="2026-01-06 14:17:59 +0000 UTC" firstStartedPulling="2026-01-06 14:18:00.787572648 +0000 UTC m=+1099.327260312" lastFinishedPulling="2026-01-06 14:18:50.767882281 +0000 UTC m=+1149.307569935" observedRunningTime="2026-01-06 14:18:51.719446077 +0000 UTC m=+1150.259133801" watchObservedRunningTime="2026-01-06 14:18:51.729683182 +0000 UTC m=+1150.269370856" Jan 06 14:18:52 crc kubenswrapper[4869]: I0106 14:18:52.700996 4869 generic.go:334] "Generic (PLEG): container finished" podID="48df0513-e689-44db-8e53-3aa186ab3063" containerID="544c9f73ca4c4e7c144b21e9cd652c59ff51963ceaf18901dddb766894e5eaf1" exitCode=0 Jan 06 14:18:52 crc kubenswrapper[4869]: I0106 14:18:52.701034 4869 generic.go:334] "Generic (PLEG): container finished" podID="48df0513-e689-44db-8e53-3aa186ab3063" containerID="122cbb53cfeed4d7ac2f08c9895176247f11d52391a26adb0f97ca902beb0e7d" exitCode=2 Jan 06 14:18:52 crc kubenswrapper[4869]: I0106 14:18:52.701047 4869 generic.go:334] "Generic (PLEG): container finished" podID="48df0513-e689-44db-8e53-3aa186ab3063" containerID="48f4823115caf7c48fbe4283a29199826490bfd152233c51f136cf548437054c" exitCode=0 Jan 06 14:18:52 crc kubenswrapper[4869]: I0106 14:18:52.701075 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48df0513-e689-44db-8e53-3aa186ab3063","Type":"ContainerDied","Data":"544c9f73ca4c4e7c144b21e9cd652c59ff51963ceaf18901dddb766894e5eaf1"} Jan 06 14:18:52 crc kubenswrapper[4869]: I0106 14:18:52.701477 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48df0513-e689-44db-8e53-3aa186ab3063","Type":"ContainerDied","Data":"122cbb53cfeed4d7ac2f08c9895176247f11d52391a26adb0f97ca902beb0e7d"} Jan 06 14:18:52 crc kubenswrapper[4869]: I0106 14:18:52.701498 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48df0513-e689-44db-8e53-3aa186ab3063","Type":"ContainerDied","Data":"48f4823115caf7c48fbe4283a29199826490bfd152233c51f136cf548437054c"} Jan 06 14:18:52 crc kubenswrapper[4869]: I0106 14:18:52.705902 4869 generic.go:334] 
"Generic (PLEG): container finished" podID="5324a677-1d17-4031-ace1-8fc98bc58f9d" containerID="a92f016203396ce371d741145531991db784723689b0373db755971bd19606e6" exitCode=0 Jan 06 14:18:52 crc kubenswrapper[4869]: I0106 14:18:52.705974 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5qp9n" event={"ID":"5324a677-1d17-4031-ace1-8fc98bc58f9d","Type":"ContainerDied","Data":"a92f016203396ce371d741145531991db784723689b0373db755971bd19606e6"} Jan 06 14:18:53 crc kubenswrapper[4869]: I0106 14:18:53.048932 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fw77s" Jan 06 14:18:53 crc kubenswrapper[4869]: I0106 14:18:53.242073 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj4bz\" (UniqueName: \"kubernetes.io/projected/64424807-a383-4509-a25c-947f73a29e64-kube-api-access-lj4bz\") pod \"64424807-a383-4509-a25c-947f73a29e64\" (UID: \"64424807-a383-4509-a25c-947f73a29e64\") " Jan 06 14:18:53 crc kubenswrapper[4869]: I0106 14:18:53.242275 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64424807-a383-4509-a25c-947f73a29e64-combined-ca-bundle\") pod \"64424807-a383-4509-a25c-947f73a29e64\" (UID: \"64424807-a383-4509-a25c-947f73a29e64\") " Jan 06 14:18:53 crc kubenswrapper[4869]: I0106 14:18:53.242323 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64424807-a383-4509-a25c-947f73a29e64-db-sync-config-data\") pod \"64424807-a383-4509-a25c-947f73a29e64\" (UID: \"64424807-a383-4509-a25c-947f73a29e64\") " Jan 06 14:18:53 crc kubenswrapper[4869]: I0106 14:18:53.248009 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64424807-a383-4509-a25c-947f73a29e64-kube-api-access-lj4bz" (OuterVolumeSpecName: "kube-api-access-lj4bz") pod "64424807-a383-4509-a25c-947f73a29e64" (UID: "64424807-a383-4509-a25c-947f73a29e64"). InnerVolumeSpecName "kube-api-access-lj4bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:18:53 crc kubenswrapper[4869]: I0106 14:18:53.252952 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64424807-a383-4509-a25c-947f73a29e64-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "64424807-a383-4509-a25c-947f73a29e64" (UID: "64424807-a383-4509-a25c-947f73a29e64"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:18:53 crc kubenswrapper[4869]: I0106 14:18:53.266705 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64424807-a383-4509-a25c-947f73a29e64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64424807-a383-4509-a25c-947f73a29e64" (UID: "64424807-a383-4509-a25c-947f73a29e64"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:18:53 crc kubenswrapper[4869]: I0106 14:18:53.345062 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64424807-a383-4509-a25c-947f73a29e64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:53 crc kubenswrapper[4869]: I0106 14:18:53.345112 4869 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64424807-a383-4509-a25c-947f73a29e64-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:53 crc kubenswrapper[4869]: I0106 14:18:53.345128 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj4bz\" (UniqueName: \"kubernetes.io/projected/64424807-a383-4509-a25c-947f73a29e64-kube-api-access-lj4bz\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:53 crc kubenswrapper[4869]: I0106 14:18:53.715187 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-fw77s" event={"ID":"64424807-a383-4509-a25c-947f73a29e64","Type":"ContainerDied","Data":"7558ea441d85201e34fa78128fbb1f15625eb44a4c2500d5bf05acb9ed9a98ba"} Jan 06 14:18:53 crc kubenswrapper[4869]: I0106 14:18:53.715213 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-fw77s" Jan 06 14:18:53 crc kubenswrapper[4869]: I0106 14:18:53.715228 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7558ea441d85201e34fa78128fbb1f15625eb44a4c2500d5bf05acb9ed9a98ba" Jan 06 14:18:53 crc kubenswrapper[4869]: I0106 14:18:53.830862 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" Jan 06 14:18:53 crc kubenswrapper[4869]: I0106 14:18:53.904058 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-6kdvd"] Jan 06 14:18:53 crc kubenswrapper[4869]: I0106 14:18:53.904308 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" podUID="bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444" containerName="dnsmasq-dns" containerID="cri-o://2e0e8e9eed377a0eccfa2667baab160c138f6599b0b99e37bed2f54556224ba7" gracePeriod=10 Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.015001 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7df5f64db9-vv7hq"] Jan 06 14:18:54 crc kubenswrapper[4869]: E0106 14:18:54.015651 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64424807-a383-4509-a25c-947f73a29e64" containerName="barbican-db-sync" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.015734 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="64424807-a383-4509-a25c-947f73a29e64" containerName="barbican-db-sync" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.015890 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="64424807-a383-4509-a25c-947f73a29e64" containerName="barbican-db-sync" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.016779 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7df5f64db9-vv7hq" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.018879 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.020494 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-hhr22" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.020829 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.052644 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7df5f64db9-vv7hq"] Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.070273 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5599cd5d56-8h5sr"] Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.071697 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5599cd5d56-8h5sr" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.077922 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.096051 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5599cd5d56-8h5sr"] Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.166411 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-j79b7"] Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.172315 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce6b66f-ac24-4b7b-98aa-39a87666921b-config-data\") pod \"barbican-keystone-listener-5599cd5d56-8h5sr\" (UID: \"fce6b66f-ac24-4b7b-98aa-39a87666921b\") " pod="openstack/barbican-keystone-listener-5599cd5d56-8h5sr" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.172370 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6c68901-bae4-40c6-a65d-a7b0834e2d71-logs\") pod \"barbican-worker-7df5f64db9-vv7hq\" (UID: \"a6c68901-bae4-40c6-a65d-a7b0834e2d71\") " pod="openstack/barbican-worker-7df5f64db9-vv7hq" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.172394 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mvqg\" (UniqueName: \"kubernetes.io/projected/a6c68901-bae4-40c6-a65d-a7b0834e2d71-kube-api-access-8mvqg\") pod \"barbican-worker-7df5f64db9-vv7hq\" (UID: \"a6c68901-bae4-40c6-a65d-a7b0834e2d71\") " pod="openstack/barbican-worker-7df5f64db9-vv7hq" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.172436 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fce6b66f-ac24-4b7b-98aa-39a87666921b-config-data-custom\") pod \"barbican-keystone-listener-5599cd5d56-8h5sr\" (UID: \"fce6b66f-ac24-4b7b-98aa-39a87666921b\") " pod="openstack/barbican-keystone-listener-5599cd5d56-8h5sr" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.172460 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a6c68901-bae4-40c6-a65d-a7b0834e2d71-config-data\") pod \"barbican-worker-7df5f64db9-vv7hq\" (UID: \"a6c68901-bae4-40c6-a65d-a7b0834e2d71\") " pod="openstack/barbican-worker-7df5f64db9-vv7hq" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.172482 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fce6b66f-ac24-4b7b-98aa-39a87666921b-logs\") pod \"barbican-keystone-listener-5599cd5d56-8h5sr\" (UID: \"fce6b66f-ac24-4b7b-98aa-39a87666921b\") " pod="openstack/barbican-keystone-listener-5599cd5d56-8h5sr" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.172497 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8gvl\" (UniqueName: \"kubernetes.io/projected/fce6b66f-ac24-4b7b-98aa-39a87666921b-kube-api-access-x8gvl\") pod \"barbican-keystone-listener-5599cd5d56-8h5sr\" (UID: \"fce6b66f-ac24-4b7b-98aa-39a87666921b\") " pod="openstack/barbican-keystone-listener-5599cd5d56-8h5sr" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.172528 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6c68901-bae4-40c6-a65d-a7b0834e2d71-config-data-custom\") pod \"barbican-worker-7df5f64db9-vv7hq\" (UID: \"a6c68901-bae4-40c6-a65d-a7b0834e2d71\") " pod="openstack/barbican-worker-7df5f64db9-vv7hq" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.172762 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-j79b7" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.172553 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce6b66f-ac24-4b7b-98aa-39a87666921b-combined-ca-bundle\") pod \"barbican-keystone-listener-5599cd5d56-8h5sr\" (UID: \"fce6b66f-ac24-4b7b-98aa-39a87666921b\") " pod="openstack/barbican-keystone-listener-5599cd5d56-8h5sr" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.173033 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c68901-bae4-40c6-a65d-a7b0834e2d71-combined-ca-bundle\") pod \"barbican-worker-7df5f64db9-vv7hq\" (UID: \"a6c68901-bae4-40c6-a65d-a7b0834e2d71\") " pod="openstack/barbican-worker-7df5f64db9-vv7hq" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.183295 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-j79b7"] Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.197455 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.275212 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c68901-bae4-40c6-a65d-a7b0834e2d71-combined-ca-bundle\") pod \"barbican-worker-7df5f64db9-vv7hq\" (UID: \"a6c68901-bae4-40c6-a65d-a7b0834e2d71\") " pod="openstack/barbican-worker-7df5f64db9-vv7hq" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.275396 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-config\") pod \"dnsmasq-dns-6bb684768f-j79b7\" (UID: \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\") " pod="openstack/dnsmasq-dns-6bb684768f-j79b7" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.275625 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-dns-svc\") pod \"dnsmasq-dns-6bb684768f-j79b7\" (UID: \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\") " pod="openstack/dnsmasq-dns-6bb684768f-j79b7" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.275787 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce6b66f-ac24-4b7b-98aa-39a87666921b-config-data\") pod \"barbican-keystone-listener-5599cd5d56-8h5sr\" (UID: \"fce6b66f-ac24-4b7b-98aa-39a87666921b\") " pod="openstack/barbican-keystone-listener-5599cd5d56-8h5sr" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.275867 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-j79b7\" (UID: \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\") " pod="openstack/dnsmasq-dns-6bb684768f-j79b7" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.275934 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6c68901-bae4-40c6-a65d-a7b0834e2d71-logs\") pod \"barbican-worker-7df5f64db9-vv7hq\" (UID: \"a6c68901-bae4-40c6-a65d-a7b0834e2d71\") " pod="openstack/barbican-worker-7df5f64db9-vv7hq" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.276000 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mvqg\" (UniqueName: \"kubernetes.io/projected/a6c68901-bae4-40c6-a65d-a7b0834e2d71-kube-api-access-8mvqg\") pod \"barbican-worker-7df5f64db9-vv7hq\" (UID: \"a6c68901-bae4-40c6-a65d-a7b0834e2d71\") " pod="openstack/barbican-worker-7df5f64db9-vv7hq" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.276100 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fce6b66f-ac24-4b7b-98aa-39a87666921b-config-data-custom\") pod \"barbican-keystone-listener-5599cd5d56-8h5sr\" (UID: \"fce6b66f-ac24-4b7b-98aa-39a87666921b\") " pod="openstack/barbican-keystone-listener-5599cd5d56-8h5sr" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.276173 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5lkc\" (UniqueName: 
\"kubernetes.io/projected/56137a76-427a-4b2d-ae18-d9a7afb2fd98-kube-api-access-d5lkc\") pod \"dnsmasq-dns-6bb684768f-j79b7\" (UID: \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\") " pod="openstack/dnsmasq-dns-6bb684768f-j79b7" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.276242 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c68901-bae4-40c6-a65d-a7b0834e2d71-config-data\") pod \"barbican-worker-7df5f64db9-vv7hq\" (UID: \"a6c68901-bae4-40c6-a65d-a7b0834e2d71\") " pod="openstack/barbican-worker-7df5f64db9-vv7hq" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.276306 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-j79b7\" (UID: \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\") " pod="openstack/dnsmasq-dns-6bb684768f-j79b7" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.276391 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8gvl\" (UniqueName: \"kubernetes.io/projected/fce6b66f-ac24-4b7b-98aa-39a87666921b-kube-api-access-x8gvl\") pod \"barbican-keystone-listener-5599cd5d56-8h5sr\" (UID: \"fce6b66f-ac24-4b7b-98aa-39a87666921b\") " pod="openstack/barbican-keystone-listener-5599cd5d56-8h5sr" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.276466 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fce6b66f-ac24-4b7b-98aa-39a87666921b-logs\") pod \"barbican-keystone-listener-5599cd5d56-8h5sr\" (UID: \"fce6b66f-ac24-4b7b-98aa-39a87666921b\") " pod="openstack/barbican-keystone-listener-5599cd5d56-8h5sr" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.276546 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6c68901-bae4-40c6-a65d-a7b0834e2d71-config-data-custom\") pod \"barbican-worker-7df5f64db9-vv7hq\" (UID: \"a6c68901-bae4-40c6-a65d-a7b0834e2d71\") " pod="openstack/barbican-worker-7df5f64db9-vv7hq" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.276624 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce6b66f-ac24-4b7b-98aa-39a87666921b-combined-ca-bundle\") pod \"barbican-keystone-listener-5599cd5d56-8h5sr\" (UID: \"fce6b66f-ac24-4b7b-98aa-39a87666921b\") " pod="openstack/barbican-keystone-listener-5599cd5d56-8h5sr" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.278016 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6c68901-bae4-40c6-a65d-a7b0834e2d71-logs\") pod \"barbican-worker-7df5f64db9-vv7hq\" (UID: \"a6c68901-bae4-40c6-a65d-a7b0834e2d71\") " pod="openstack/barbican-worker-7df5f64db9-vv7hq" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.278423 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fce6b66f-ac24-4b7b-98aa-39a87666921b-logs\") pod \"barbican-keystone-listener-5599cd5d56-8h5sr\" (UID: \"fce6b66f-ac24-4b7b-98aa-39a87666921b\") " pod="openstack/barbican-keystone-listener-5599cd5d56-8h5sr" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.287300 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6c68901-bae4-40c6-a65d-a7b0834e2d71-config-data-custom\") pod \"barbican-worker-7df5f64db9-vv7hq\" (UID: \"a6c68901-bae4-40c6-a65d-a7b0834e2d71\") " pod="openstack/barbican-worker-7df5f64db9-vv7hq" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.287324 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fce6b66f-ac24-4b7b-98aa-39a87666921b-config-data-custom\") pod \"barbican-keystone-listener-5599cd5d56-8h5sr\" (UID: \"fce6b66f-ac24-4b7b-98aa-39a87666921b\") " pod="openstack/barbican-keystone-listener-5599cd5d56-8h5sr" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.287809 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6cbc78c6fb-zx9xt"] Jan 06 14:18:54 crc kubenswrapper[4869]: E0106 14:18:54.288226 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5324a677-1d17-4031-ace1-8fc98bc58f9d" containerName="cinder-db-sync" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.288241 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5324a677-1d17-4031-ace1-8fc98bc58f9d" containerName="cinder-db-sync" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.288383 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="5324a677-1d17-4031-ace1-8fc98bc58f9d" containerName="cinder-db-sync" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.289303 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.290758 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c68901-bae4-40c6-a65d-a7b0834e2d71-config-data\") pod \"barbican-worker-7df5f64db9-vv7hq\" (UID: \"a6c68901-bae4-40c6-a65d-a7b0834e2d71\") " pod="openstack/barbican-worker-7df5f64db9-vv7hq" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.292455 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.297501 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c68901-bae4-40c6-a65d-a7b0834e2d71-combined-ca-bundle\") pod \"barbican-worker-7df5f64db9-vv7hq\" (UID: \"a6c68901-bae4-40c6-a65d-a7b0834e2d71\") " pod="openstack/barbican-worker-7df5f64db9-vv7hq" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.300525 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mvqg\" (UniqueName: \"kubernetes.io/projected/a6c68901-bae4-40c6-a65d-a7b0834e2d71-kube-api-access-8mvqg\") pod \"barbican-worker-7df5f64db9-vv7hq\" (UID: \"a6c68901-bae4-40c6-a65d-a7b0834e2d71\") " pod="openstack/barbican-worker-7df5f64db9-vv7hq" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.301081 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce6b66f-ac24-4b7b-98aa-39a87666921b-combined-ca-bundle\") pod \"barbican-keystone-listener-5599cd5d56-8h5sr\" (UID: \"fce6b66f-ac24-4b7b-98aa-39a87666921b\") " pod="openstack/barbican-keystone-listener-5599cd5d56-8h5sr" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.305416 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-x8gvl\" (UniqueName: \"kubernetes.io/projected/fce6b66f-ac24-4b7b-98aa-39a87666921b-kube-api-access-x8gvl\") pod \"barbican-keystone-listener-5599cd5d56-8h5sr\" (UID: \"fce6b66f-ac24-4b7b-98aa-39a87666921b\") " pod="openstack/barbican-keystone-listener-5599cd5d56-8h5sr" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.306961 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce6b66f-ac24-4b7b-98aa-39a87666921b-config-data\") pod \"barbican-keystone-listener-5599cd5d56-8h5sr\" (UID: \"fce6b66f-ac24-4b7b-98aa-39a87666921b\") " pod="openstack/barbican-keystone-listener-5599cd5d56-8h5sr" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.330306 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6cbc78c6fb-zx9xt"] Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.386477 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-scripts\") pod \"5324a677-1d17-4031-ace1-8fc98bc58f9d\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.386564 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-db-sync-config-data\") pod \"5324a677-1d17-4031-ace1-8fc98bc58f9d\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.386609 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-combined-ca-bundle\") pod \"5324a677-1d17-4031-ace1-8fc98bc58f9d\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.386636 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5324a677-1d17-4031-ace1-8fc98bc58f9d-etc-machine-id\") pod \"5324a677-1d17-4031-ace1-8fc98bc58f9d\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.386753 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdxm5\" (UniqueName: \"kubernetes.io/projected/5324a677-1d17-4031-ace1-8fc98bc58f9d-kube-api-access-pdxm5\") pod \"5324a677-1d17-4031-ace1-8fc98bc58f9d\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.386809 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-config-data\") pod \"5324a677-1d17-4031-ace1-8fc98bc58f9d\" (UID: \"5324a677-1d17-4031-ace1-8fc98bc58f9d\") " Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.387231 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-dns-svc\") pod \"dnsmasq-dns-6bb684768f-j79b7\" (UID: \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\") " pod="openstack/dnsmasq-dns-6bb684768f-j79b7" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.387302 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-j79b7\" (UID: \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\") " pod="openstack/dnsmasq-dns-6bb684768f-j79b7" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.387322 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5324a677-1d17-4031-ace1-8fc98bc58f9d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5324a677-1d17-4031-ace1-8fc98bc58f9d" (UID: "5324a677-1d17-4031-ace1-8fc98bc58f9d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.387372 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5lkc\" (UniqueName: \"kubernetes.io/projected/56137a76-427a-4b2d-ae18-d9a7afb2fd98-kube-api-access-d5lkc\") pod \"dnsmasq-dns-6bb684768f-j79b7\" (UID: \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\") " pod="openstack/dnsmasq-dns-6bb684768f-j79b7" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.387406 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-j79b7\" (UID: \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\") " pod="openstack/dnsmasq-dns-6bb684768f-j79b7" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.388699 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-dns-svc\") pod \"dnsmasq-dns-6bb684768f-j79b7\" (UID: \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\") " pod="openstack/dnsmasq-dns-6bb684768f-j79b7" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.391301 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-config\") pod \"dnsmasq-dns-6bb684768f-j79b7\" (UID: \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\") " pod="openstack/dnsmasq-dns-6bb684768f-j79b7" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.391391 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5324a677-1d17-4031-ace1-8fc98bc58f9d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.392357 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-config\") pod \"dnsmasq-dns-6bb684768f-j79b7\" (UID: \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\") " pod="openstack/dnsmasq-dns-6bb684768f-j79b7" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.392521 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5324a677-1d17-4031-ace1-8fc98bc58f9d" (UID: "5324a677-1d17-4031-ace1-8fc98bc58f9d"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.394979 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-scripts" (OuterVolumeSpecName: "scripts") pod "5324a677-1d17-4031-ace1-8fc98bc58f9d" (UID: "5324a677-1d17-4031-ace1-8fc98bc58f9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.395413 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-j79b7\" (UID: \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\") " pod="openstack/dnsmasq-dns-6bb684768f-j79b7" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.399378 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-j79b7\" (UID: \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\") " pod="openstack/dnsmasq-dns-6bb684768f-j79b7" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.401174 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5324a677-1d17-4031-ace1-8fc98bc58f9d-kube-api-access-pdxm5" (OuterVolumeSpecName: "kube-api-access-pdxm5") pod "5324a677-1d17-4031-ace1-8fc98bc58f9d" (UID: "5324a677-1d17-4031-ace1-8fc98bc58f9d"). InnerVolumeSpecName "kube-api-access-pdxm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.404807 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5lkc\" (UniqueName: \"kubernetes.io/projected/56137a76-427a-4b2d-ae18-d9a7afb2fd98-kube-api-access-d5lkc\") pod \"dnsmasq-dns-6bb684768f-j79b7\" (UID: \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\") " pod="openstack/dnsmasq-dns-6bb684768f-j79b7" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.429796 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5324a677-1d17-4031-ace1-8fc98bc58f9d" (UID: "5324a677-1d17-4031-ace1-8fc98bc58f9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.454521 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-config-data" (OuterVolumeSpecName: "config-data") pod "5324a677-1d17-4031-ace1-8fc98bc58f9d" (UID: "5324a677-1d17-4031-ace1-8fc98bc58f9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.467569 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7df5f64db9-vv7hq" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.492453 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4btvp\" (UniqueName: \"kubernetes.io/projected/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-kube-api-access-4btvp\") pod \"barbican-api-6cbc78c6fb-zx9xt\" (UID: \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\") " pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.492495 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-config-data-custom\") pod \"barbican-api-6cbc78c6fb-zx9xt\" (UID: \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\") " pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.492532 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-combined-ca-bundle\") pod \"barbican-api-6cbc78c6fb-zx9xt\" (UID: \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\") " pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.492570 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-config-data\") pod \"barbican-api-6cbc78c6fb-zx9xt\" (UID: \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\") " pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.492611 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-logs\") pod \"barbican-api-6cbc78c6fb-zx9xt\" (UID: \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\") " pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.492737 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.492788 4869 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.492934 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5599cd5d56-8h5sr" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.493409 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.493441 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdxm5\" (UniqueName: \"kubernetes.io/projected/5324a677-1d17-4031-ace1-8fc98bc58f9d-kube-api-access-pdxm5\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.493453 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5324a677-1d17-4031-ace1-8fc98bc58f9d-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.504927 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.518354 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-j79b7" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.594742 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-ovsdbserver-sb\") pod \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\" (UID: \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\") " Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.594854 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-config\") pod \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\" (UID: \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\") " Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.594883 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9rcd\" (UniqueName: \"kubernetes.io/projected/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-kube-api-access-q9rcd\") pod \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\" (UID: \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\") " Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.594911 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-dns-svc\") pod \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\" (UID: \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\") " Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.595047 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-ovsdbserver-nb\") pod \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\" (UID: \"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444\") " Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.595428 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4btvp\" (UniqueName: \"kubernetes.io/projected/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-kube-api-access-4btvp\") pod \"barbican-api-6cbc78c6fb-zx9xt\" (UID: \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\") " pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.595461 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-config-data-custom\") pod \"barbican-api-6cbc78c6fb-zx9xt\" (UID: \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\") " pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.595502 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-combined-ca-bundle\") pod \"barbican-api-6cbc78c6fb-zx9xt\" (UID: \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\") " pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.595547 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-config-data\") pod \"barbican-api-6cbc78c6fb-zx9xt\" (UID: \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\") " pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.595602 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-logs\") pod \"barbican-api-6cbc78c6fb-zx9xt\" (UID: \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\") " pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.596170 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-logs\") pod \"barbican-api-6cbc78c6fb-zx9xt\" (UID: \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\") " pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.613251 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-kube-api-access-q9rcd" (OuterVolumeSpecName: "kube-api-access-q9rcd") pod "bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444" (UID: "bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444"). InnerVolumeSpecName "kube-api-access-q9rcd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.613700 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-combined-ca-bundle\") pod \"barbican-api-6cbc78c6fb-zx9xt\" (UID: \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\") " pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.620452 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-config-data-custom\") pod \"barbican-api-6cbc78c6fb-zx9xt\" (UID: \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\") " pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.636169 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-config-data\") pod \"barbican-api-6cbc78c6fb-zx9xt\" (UID: \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\") " pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.640174 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4btvp\" (UniqueName: \"kubernetes.io/projected/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-kube-api-access-4btvp\") pod \"barbican-api-6cbc78c6fb-zx9xt\" (UID: \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\") " pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.671026 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444" (UID: "bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.673440 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.678463 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-config" (OuterVolumeSpecName: "config") pod "bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444" (UID: "bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.683829 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444" (UID: "bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.686605 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444" (UID: "bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.702658 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.702770 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.702796 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.702813 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9rcd\" (UniqueName: \"kubernetes.io/projected/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-kube-api-access-q9rcd\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.702896 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.759001 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5qp9n" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.759026 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5qp9n" event={"ID":"5324a677-1d17-4031-ace1-8fc98bc58f9d","Type":"ContainerDied","Data":"b6fb5126b368fb9b127c063f355589725ccc534cd7d422f752c58b67c26603aa"} Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.759081 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6fb5126b368fb9b127c063f355589725ccc534cd7d422f752c58b67c26603aa" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.763918 4869 generic.go:334] "Generic (PLEG): container finished" podID="bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444" containerID="2e0e8e9eed377a0eccfa2667baab160c138f6599b0b99e37bed2f54556224ba7" exitCode=0 Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.763968 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" event={"ID":"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444","Type":"ContainerDied","Data":"2e0e8e9eed377a0eccfa2667baab160c138f6599b0b99e37bed2f54556224ba7"} Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.763999 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" event={"ID":"bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444","Type":"ContainerDied","Data":"869b57bc46530b0488b7ed4971058fd3a711ced66012d736db3f89fac8a14532"} Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.764019 4869 scope.go:117] "RemoveContainer" containerID="2e0e8e9eed377a0eccfa2667baab160c138f6599b0b99e37bed2f54556224ba7" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.764146 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-6kdvd" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.808426 4869 scope.go:117] "RemoveContainer" containerID="b48c4cbf6de15bc69d251c70f2b0e04f1294979382c087ece0b63525ad038933" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.820478 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-6kdvd"] Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.831646 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-6kdvd"] Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.860305 4869 scope.go:117] "RemoveContainer" containerID="2e0e8e9eed377a0eccfa2667baab160c138f6599b0b99e37bed2f54556224ba7" Jan 06 14:18:54 crc kubenswrapper[4869]: E0106 14:18:54.862877 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e0e8e9eed377a0eccfa2667baab160c138f6599b0b99e37bed2f54556224ba7\": container with ID starting with 2e0e8e9eed377a0eccfa2667baab160c138f6599b0b99e37bed2f54556224ba7 not found: ID does not exist" containerID="2e0e8e9eed377a0eccfa2667baab160c138f6599b0b99e37bed2f54556224ba7" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.862926 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e0e8e9eed377a0eccfa2667baab160c138f6599b0b99e37bed2f54556224ba7"} err="failed to get container status \"2e0e8e9eed377a0eccfa2667baab160c138f6599b0b99e37bed2f54556224ba7\": rpc error: code = NotFound desc = could not find container \"2e0e8e9eed377a0eccfa2667baab160c138f6599b0b99e37bed2f54556224ba7\": container with ID starting with 2e0e8e9eed377a0eccfa2667baab160c138f6599b0b99e37bed2f54556224ba7 not found: ID does not exist" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.862955 4869 scope.go:117] "RemoveContainer" containerID="b48c4cbf6de15bc69d251c70f2b0e04f1294979382c087ece0b63525ad038933" Jan 06 14:18:54 crc kubenswrapper[4869]: E0106 14:18:54.865805 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b48c4cbf6de15bc69d251c70f2b0e04f1294979382c087ece0b63525ad038933\": container with ID starting with b48c4cbf6de15bc69d251c70f2b0e04f1294979382c087ece0b63525ad038933 not found: ID does not exist" containerID="b48c4cbf6de15bc69d251c70f2b0e04f1294979382c087ece0b63525ad038933" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.865879 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b48c4cbf6de15bc69d251c70f2b0e04f1294979382c087ece0b63525ad038933"} err="failed to get container status \"b48c4cbf6de15bc69d251c70f2b0e04f1294979382c087ece0b63525ad038933\": rpc error: code = NotFound desc = could not find container \"b48c4cbf6de15bc69d251c70f2b0e04f1294979382c087ece0b63525ad038933\": container with ID starting with b48c4cbf6de15bc69d251c70f2b0e04f1294979382c087ece0b63525ad038933 not found: ID does not exist" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.954490 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 06 14:18:54 crc kubenswrapper[4869]: E0106 14:18:54.954916 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444" containerName="init" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.954940 4869 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444" containerName="init" Jan 06 14:18:54 crc kubenswrapper[4869]: E0106 14:18:54.954978 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444" containerName="dnsmasq-dns" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.954985 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444" containerName="dnsmasq-dns" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.955185 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444" containerName="dnsmasq-dns" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.956039 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.960142 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.960571 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.960705 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ztln6" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.960828 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 06 14:18:54 crc kubenswrapper[4869]: I0106 14:18:54.977820 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.011512 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " pod="openstack/cinder-scheduler-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.011625 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-config-data\") pod \"cinder-scheduler-0\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " pod="openstack/cinder-scheduler-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.011788 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " pod="openstack/cinder-scheduler-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.011897 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72361b1f-122a-46d2-9acc-a0ccdb892326-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " pod="openstack/cinder-scheduler-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.011990 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsfch\" (UniqueName: \"kubernetes.io/projected/72361b1f-122a-46d2-9acc-a0ccdb892326-kube-api-access-bsfch\") pod \"cinder-scheduler-0\" (UID: 
\"72361b1f-122a-46d2-9acc-a0ccdb892326\") " pod="openstack/cinder-scheduler-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.012075 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-scripts\") pod \"cinder-scheduler-0\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " pod="openstack/cinder-scheduler-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.038613 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-j79b7"] Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.102758 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7df5f64db9-vv7hq"] Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.121169 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72361b1f-122a-46d2-9acc-a0ccdb892326-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " pod="openstack/cinder-scheduler-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.121329 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsfch\" (UniqueName: \"kubernetes.io/projected/72361b1f-122a-46d2-9acc-a0ccdb892326-kube-api-access-bsfch\") pod \"cinder-scheduler-0\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " pod="openstack/cinder-scheduler-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.121394 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-scripts\") pod \"cinder-scheduler-0\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " pod="openstack/cinder-scheduler-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.121517 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " pod="openstack/cinder-scheduler-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.121560 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-config-data\") pod \"cinder-scheduler-0\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " pod="openstack/cinder-scheduler-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.121594 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " pod="openstack/cinder-scheduler-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.123295 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72361b1f-122a-46d2-9acc-a0ccdb892326-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " pod="openstack/cinder-scheduler-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.132170 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " pod="openstack/cinder-scheduler-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.144881 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-j79b7"] Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.164120 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-scripts\") pod \"cinder-scheduler-0\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " pod="openstack/cinder-scheduler-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.204453 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " pod="openstack/cinder-scheduler-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.212962 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-frkrl"] Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.238922 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsfch\" (UniqueName: \"kubernetes.io/projected/72361b1f-122a-46d2-9acc-a0ccdb892326-kube-api-access-bsfch\") pod \"cinder-scheduler-0\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " pod="openstack/cinder-scheduler-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.278461 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-config-data\") pod \"cinder-scheduler-0\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " pod="openstack/cinder-scheduler-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.279420 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.349564 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-frkrl"] Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.385714 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5599cd5d56-8h5sr"] Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.403030 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-frkrl\" (UID: \"798c903a-0423-4e97-a986-9b705bb64ad9\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.403114 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-config\") pod \"dnsmasq-dns-6d97fcdd8f-frkrl\" (UID: \"798c903a-0423-4e97-a986-9b705bb64ad9\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.403141 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-frkrl\" (UID: \"798c903a-0423-4e97-a986-9b705bb64ad9\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.403169 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6t28\" (UniqueName: \"kubernetes.io/projected/798c903a-0423-4e97-a986-9b705bb64ad9-kube-api-access-j6t28\") pod \"dnsmasq-dns-6d97fcdd8f-frkrl\" (UID: \"798c903a-0423-4e97-a986-9b705bb64ad9\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.403241 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-frkrl\" (UID: \"798c903a-0423-4e97-a986-9b705bb64ad9\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.410805 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6cbc78c6fb-zx9xt"] Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.451514 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.453021 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.456007 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.480361 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.505653 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-frkrl\" (UID: \"798c903a-0423-4e97-a986-9b705bb64ad9\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.505805 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-config\") pod \"dnsmasq-dns-6d97fcdd8f-frkrl\" (UID: \"798c903a-0423-4e97-a986-9b705bb64ad9\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.505838 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-frkrl\" (UID: \"798c903a-0423-4e97-a986-9b705bb64ad9\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.505863 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6t28\" (UniqueName: \"kubernetes.io/projected/798c903a-0423-4e97-a986-9b705bb64ad9-kube-api-access-j6t28\") pod \"dnsmasq-dns-6d97fcdd8f-frkrl\" (UID: \"798c903a-0423-4e97-a986-9b705bb64ad9\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.505931 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-frkrl\" (UID: \"798c903a-0423-4e97-a986-9b705bb64ad9\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.506820 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-config\") pod \"dnsmasq-dns-6d97fcdd8f-frkrl\" (UID: \"798c903a-0423-4e97-a986-9b705bb64ad9\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.506855 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-frkrl\" (UID: \"798c903a-0423-4e97-a986-9b705bb64ad9\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.507037 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-frkrl\" (UID: \"798c903a-0423-4e97-a986-9b705bb64ad9\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.508055 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-frkrl\" (UID: \"798c903a-0423-4e97-a986-9b705bb64ad9\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.524926 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6t28\" (UniqueName: \"kubernetes.io/projected/798c903a-0423-4e97-a986-9b705bb64ad9-kube-api-access-j6t28\") pod \"dnsmasq-dns-6d97fcdd8f-frkrl\" (UID: \"798c903a-0423-4e97-a986-9b705bb64ad9\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.578233 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.609604 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-config-data\") pod \"cinder-api-0\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.609710 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.609804 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-scripts\") pod \"cinder-api-0\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.609831 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khnj5\" (UniqueName: \"kubernetes.io/projected/0a881307-a568-4715-95d3-59aa91b69477-kube-api-access-khnj5\") pod \"cinder-api-0\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.609850 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a881307-a568-4715-95d3-59aa91b69477-logs\") pod \"cinder-api-0\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.609883 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-config-data-custom\") pod \"cinder-api-0\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.609920 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a881307-a568-4715-95d3-59aa91b69477-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.665676 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.711054 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khnj5\" (UniqueName: \"kubernetes.io/projected/0a881307-a568-4715-95d3-59aa91b69477-kube-api-access-khnj5\") pod \"cinder-api-0\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.711520 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a881307-a568-4715-95d3-59aa91b69477-logs\") pod \"cinder-api-0\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.712935 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-config-data-custom\") pod \"cinder-api-0\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.713021 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a881307-a568-4715-95d3-59aa91b69477-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.713056 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-config-data\") pod \"cinder-api-0\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.713173 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a881307-a568-4715-95d3-59aa91b69477-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.713160 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.714814 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a881307-a568-4715-95d3-59aa91b69477-logs\") pod \"cinder-api-0\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.722044 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.723483 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"0a881307-a568-4715-95d3-59aa91b69477\") " pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.723828 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-config-data\") pod \"cinder-api-0\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.725699 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-scripts\") pod \"cinder-api-0\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.726375 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444" path="/var/lib/kubelet/pods/bf3f5ac4-8b1f-40be-9d3d-eeb091dcf444/volumes" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.735330 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-scripts\") pod \"cinder-api-0\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.743068 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khnj5\" (UniqueName: \"kubernetes.io/projected/0a881307-a568-4715-95d3-59aa91b69477-kube-api-access-khnj5\") pod \"cinder-api-0\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " pod="openstack/cinder-api-0" Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.791372 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cbc78c6fb-zx9xt" event={"ID":"3f909e77-b5e4-46b0-b0b1-246b9fde7b73","Type":"ContainerStarted","Data":"68fa3037263ef13431d633903b0a9ba84a0c4545ff4e451b987240f0ad644aaf"} Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.791421 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cbc78c6fb-zx9xt" event={"ID":"3f909e77-b5e4-46b0-b0b1-246b9fde7b73","Type":"ContainerStarted","Data":"cf88e1d3323ce94a910bad03595b42c2f19bde0b14ef6750a6e1b7dbf27a6493"} Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.793164 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5599cd5d56-8h5sr" event={"ID":"fce6b66f-ac24-4b7b-98aa-39a87666921b","Type":"ContainerStarted","Data":"8210435eb3598aa19a5c046a415d8e1099fa67a1939c7ef47cf6518107424d92"} Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.794363 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7df5f64db9-vv7hq" event={"ID":"a6c68901-bae4-40c6-a65d-a7b0834e2d71","Type":"ContainerStarted","Data":"5d66a2f8d2400b7f03091561e18d1ee2eacc3916aa3fd2d7e6dc479030c7eee1"} Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.798025 4869 generic.go:334] "Generic (PLEG): container finished" podID="56137a76-427a-4b2d-ae18-d9a7afb2fd98" containerID="e8fa3fffbed2748020bf79557497aab457a7aa816337b9f5661c62b338221e00" exitCode=0 Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.798057 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-j79b7" event={"ID":"56137a76-427a-4b2d-ae18-d9a7afb2fd98","Type":"ContainerDied","Data":"e8fa3fffbed2748020bf79557497aab457a7aa816337b9f5661c62b338221e00"} Jan 06 
14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.798073 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-j79b7" event={"ID":"56137a76-427a-4b2d-ae18-d9a7afb2fd98","Type":"ContainerStarted","Data":"d2cdeb1714a4f2c54d9c7c9b41da4ca1248e9c58762c149cf7b699049cdb64f2"} Jan 06 14:18:55 crc kubenswrapper[4869]: I0106 14:18:55.799254 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.129109 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.179681 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-frkrl"] Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.337891 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-j79b7" Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.434047 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.445132 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-dns-svc\") pod \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\" (UID: \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\") " Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.445194 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-ovsdbserver-sb\") pod \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\" (UID: \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\") " Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.445349 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-config\") pod \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\" (UID: \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\") " Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.445389 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5lkc\" (UniqueName: \"kubernetes.io/projected/56137a76-427a-4b2d-ae18-d9a7afb2fd98-kube-api-access-d5lkc\") pod \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\" (UID: \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\") " Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.445439 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-ovsdbserver-nb\") pod \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\" (UID: \"56137a76-427a-4b2d-ae18-d9a7afb2fd98\") " Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.453814 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56137a76-427a-4b2d-ae18-d9a7afb2fd98-kube-api-access-d5lkc" (OuterVolumeSpecName: "kube-api-access-d5lkc") pod "56137a76-427a-4b2d-ae18-d9a7afb2fd98" (UID: "56137a76-427a-4b2d-ae18-d9a7afb2fd98"). InnerVolumeSpecName "kube-api-access-d5lkc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:18:56 crc kubenswrapper[4869]: W0106 14:18:56.462774 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a881307_a568_4715_95d3_59aa91b69477.slice/crio-edce37e75bca54edc97174ba20a960f96481295b6d9a53867c7ab8ef89bcf0a5 WatchSource:0}: Error finding container edce37e75bca54edc97174ba20a960f96481295b6d9a53867c7ab8ef89bcf0a5: Status 404 returned error can't find the container with id edce37e75bca54edc97174ba20a960f96481295b6d9a53867c7ab8ef89bcf0a5 Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.469588 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56137a76-427a-4b2d-ae18-d9a7afb2fd98" (UID: "56137a76-427a-4b2d-ae18-d9a7afb2fd98"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.473355 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56137a76-427a-4b2d-ae18-d9a7afb2fd98" (UID: "56137a76-427a-4b2d-ae18-d9a7afb2fd98"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.479427 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56137a76-427a-4b2d-ae18-d9a7afb2fd98" (UID: "56137a76-427a-4b2d-ae18-d9a7afb2fd98"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.485869 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-config" (OuterVolumeSpecName: "config") pod "56137a76-427a-4b2d-ae18-d9a7afb2fd98" (UID: "56137a76-427a-4b2d-ae18-d9a7afb2fd98"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.548753 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.548812 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5lkc\" (UniqueName: \"kubernetes.io/projected/56137a76-427a-4b2d-ae18-d9a7afb2fd98-kube-api-access-d5lkc\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.548830 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.548845 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.548859 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56137a76-427a-4b2d-ae18-d9a7afb2fd98-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.806889 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"72361b1f-122a-46d2-9acc-a0ccdb892326","Type":"ContainerStarted","Data":"6b705cd327ed6a3496b353979e847f4e50dc019b0161b66aea3274d2d6e5d90d"} Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.809423 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cbc78c6fb-zx9xt" event={"ID":"3f909e77-b5e4-46b0-b0b1-246b9fde7b73","Type":"ContainerStarted","Data":"9c9c7f12fc0d3f22247723a7d6a767df308c09b8fd911ae69d2cc364c0b6f424"} Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.809843 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.811880 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.812111 4869 generic.go:334] "Generic (PLEG): container finished" podID="798c903a-0423-4e97-a986-9b705bb64ad9" containerID="ab38195008dce8827bc486d5644e9325e3a72d0d683b3b5277e0c7894325a7b8" exitCode=0 Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.812174 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" event={"ID":"798c903a-0423-4e97-a986-9b705bb64ad9","Type":"ContainerDied","Data":"ab38195008dce8827bc486d5644e9325e3a72d0d683b3b5277e0c7894325a7b8"} Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.812202 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" event={"ID":"798c903a-0423-4e97-a986-9b705bb64ad9","Type":"ContainerStarted","Data":"727a1a07656f76bd5c48473425171d691a2cdf757179c1295cee38e8520f797d"} Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.815530 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a881307-a568-4715-95d3-59aa91b69477","Type":"ContainerStarted","Data":"edce37e75bca54edc97174ba20a960f96481295b6d9a53867c7ab8ef89bcf0a5"} Jan 06 14:18:56 
crc kubenswrapper[4869]: I0106 14:18:56.818071 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-j79b7" event={"ID":"56137a76-427a-4b2d-ae18-d9a7afb2fd98","Type":"ContainerDied","Data":"d2cdeb1714a4f2c54d9c7c9b41da4ca1248e9c58762c149cf7b699049cdb64f2"} Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.818196 4869 scope.go:117] "RemoveContainer" containerID="e8fa3fffbed2748020bf79557497aab457a7aa816337b9f5661c62b338221e00" Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.818112 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-j79b7" Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.822103 4869 generic.go:334] "Generic (PLEG): container finished" podID="48df0513-e689-44db-8e53-3aa186ab3063" containerID="99bf881bf4015ee51066610468ba48d3ce7e3dbdd86e6ab2e187a48969887165" exitCode=0 Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.822143 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48df0513-e689-44db-8e53-3aa186ab3063","Type":"ContainerDied","Data":"99bf881bf4015ee51066610468ba48d3ce7e3dbdd86e6ab2e187a48969887165"} Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.835501 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6cbc78c6fb-zx9xt" podStartSLOduration=2.835466784 podStartE2EDuration="2.835466784s" podCreationTimestamp="2026-01-06 14:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:18:56.826162354 +0000 UTC m=+1155.365850038" watchObservedRunningTime="2026-01-06 14:18:56.835466784 +0000 UTC m=+1155.375154448" Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.898600 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-j79b7"] Jan 06 14:18:56 crc kubenswrapper[4869]: I0106 14:18:56.905842 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-j79b7"] Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.505892 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.584553 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48df0513-e689-44db-8e53-3aa186ab3063-log-httpd\") pod \"48df0513-e689-44db-8e53-3aa186ab3063\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.584615 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-config-data\") pod \"48df0513-e689-44db-8e53-3aa186ab3063\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.584797 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-combined-ca-bundle\") pod \"48df0513-e689-44db-8e53-3aa186ab3063\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.584831 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s55hs\" (UniqueName: \"kubernetes.io/projected/48df0513-e689-44db-8e53-3aa186ab3063-kube-api-access-s55hs\") pod \"48df0513-e689-44db-8e53-3aa186ab3063\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.584852 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-sg-core-conf-yaml\") pod \"48df0513-e689-44db-8e53-3aa186ab3063\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.584956 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-scripts\") pod \"48df0513-e689-44db-8e53-3aa186ab3063\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.585014 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48df0513-e689-44db-8e53-3aa186ab3063-run-httpd\") pod \"48df0513-e689-44db-8e53-3aa186ab3063\" (UID: \"48df0513-e689-44db-8e53-3aa186ab3063\") " Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.589720 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48df0513-e689-44db-8e53-3aa186ab3063-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "48df0513-e689-44db-8e53-3aa186ab3063" (UID: "48df0513-e689-44db-8e53-3aa186ab3063"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.590325 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48df0513-e689-44db-8e53-3aa186ab3063-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "48df0513-e689-44db-8e53-3aa186ab3063" (UID: "48df0513-e689-44db-8e53-3aa186ab3063"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.627164 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-scripts" (OuterVolumeSpecName: "scripts") pod "48df0513-e689-44db-8e53-3aa186ab3063" (UID: "48df0513-e689-44db-8e53-3aa186ab3063"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.639693 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48df0513-e689-44db-8e53-3aa186ab3063-kube-api-access-s55hs" (OuterVolumeSpecName: "kube-api-access-s55hs") pod "48df0513-e689-44db-8e53-3aa186ab3063" (UID: "48df0513-e689-44db-8e53-3aa186ab3063"). InnerVolumeSpecName "kube-api-access-s55hs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.643775 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.693720 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s55hs\" (UniqueName: \"kubernetes.io/projected/48df0513-e689-44db-8e53-3aa186ab3063-kube-api-access-s55hs\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.693925 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.693942 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48df0513-e689-44db-8e53-3aa186ab3063-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.693952 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/48df0513-e689-44db-8e53-3aa186ab3063-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.702782 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "48df0513-e689-44db-8e53-3aa186ab3063" (UID: "48df0513-e689-44db-8e53-3aa186ab3063"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.721710 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56137a76-427a-4b2d-ae18-d9a7afb2fd98" path="/var/lib/kubelet/pods/56137a76-427a-4b2d-ae18-d9a7afb2fd98/volumes" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.762338 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48df0513-e689-44db-8e53-3aa186ab3063" (UID: "48df0513-e689-44db-8e53-3aa186ab3063"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.782227 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-config-data" (OuterVolumeSpecName: "config-data") pod "48df0513-e689-44db-8e53-3aa186ab3063" (UID: "48df0513-e689-44db-8e53-3aa186ab3063"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.798364 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.798403 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.798442 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/48df0513-e689-44db-8e53-3aa186ab3063-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.846404 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" event={"ID":"798c903a-0423-4e97-a986-9b705bb64ad9","Type":"ContainerStarted","Data":"6117bc10fd17814766b0f4e171951d82d40b0ededb7497190e05cdf7f5c30e2e"} Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.847220 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.851133 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7df5f64db9-vv7hq" event={"ID":"a6c68901-bae4-40c6-a65d-a7b0834e2d71","Type":"ContainerStarted","Data":"543a1a1ea6c5ed5a560c94b2350d56183ee78179a217211faadd869170986654"} Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.857218 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a881307-a568-4715-95d3-59aa91b69477","Type":"ContainerStarted","Data":"4d739cf54ed174894d76d5d15b784f62e5ef5609637c57258eb64aa320b4d313"} Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.872110 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"48df0513-e689-44db-8e53-3aa186ab3063","Type":"ContainerDied","Data":"f2f5a9ba211c5d9d65b63dafc9fda814520150fb12f238ce2c93ce66b4cac1e9"} Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.872154 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.872192 4869 scope.go:117] "RemoveContainer" containerID="544c9f73ca4c4e7c144b21e9cd652c59ff51963ceaf18901dddb766894e5eaf1" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.878760 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5599cd5d56-8h5sr" event={"ID":"fce6b66f-ac24-4b7b-98aa-39a87666921b","Type":"ContainerStarted","Data":"d68a3077a9d31721f3de9ade7358b326ad8e69b1209a24d858a68c6208f38fea"} Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.882064 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" podStartSLOduration=2.882035578 podStartE2EDuration="2.882035578s" podCreationTimestamp="2026-01-06 14:18:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:18:57.869773453 +0000 UTC m=+1156.409461117" watchObservedRunningTime="2026-01-06 14:18:57.882035578 +0000 UTC m=+1156.421723242" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.894930 4869 scope.go:117] "RemoveContainer" containerID="122cbb53cfeed4d7ac2f08c9895176247f11d52391a26adb0f97ca902beb0e7d" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.930604 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.943759 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.966561 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:18:57 crc kubenswrapper[4869]: E0106 14:18:57.967280 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48df0513-e689-44db-8e53-3aa186ab3063" containerName="ceilometer-central-agent" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.967301 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="48df0513-e689-44db-8e53-3aa186ab3063" containerName="ceilometer-central-agent" Jan 06 14:18:57 crc kubenswrapper[4869]: E0106 14:18:57.967326 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48df0513-e689-44db-8e53-3aa186ab3063" containerName="proxy-httpd" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.967335 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="48df0513-e689-44db-8e53-3aa186ab3063" containerName="proxy-httpd" Jan 06 14:18:57 crc kubenswrapper[4869]: E0106 14:18:57.967355 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48df0513-e689-44db-8e53-3aa186ab3063" containerName="sg-core" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.967365 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="48df0513-e689-44db-8e53-3aa186ab3063" containerName="sg-core" Jan 06 14:18:57 crc kubenswrapper[4869]: E0106 14:18:57.967384 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56137a76-427a-4b2d-ae18-d9a7afb2fd98" containerName="init" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.967392 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="56137a76-427a-4b2d-ae18-d9a7afb2fd98" containerName="init" Jan 06 14:18:57 crc kubenswrapper[4869]: E0106 14:18:57.967409 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48df0513-e689-44db-8e53-3aa186ab3063" containerName="ceilometer-notification-agent" Jan 06 14:18:57 crc 
kubenswrapper[4869]: I0106 14:18:57.967418 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="48df0513-e689-44db-8e53-3aa186ab3063" containerName="ceilometer-notification-agent" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.967611 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="48df0513-e689-44db-8e53-3aa186ab3063" containerName="ceilometer-central-agent" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.967621 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="48df0513-e689-44db-8e53-3aa186ab3063" containerName="ceilometer-notification-agent" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.967641 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="48df0513-e689-44db-8e53-3aa186ab3063" containerName="proxy-httpd" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.967687 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="48df0513-e689-44db-8e53-3aa186ab3063" containerName="sg-core" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.967700 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="56137a76-427a-4b2d-ae18-d9a7afb2fd98" containerName="init" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.969845 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.969956 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.983266 4869 scope.go:117] "RemoveContainer" containerID="99bf881bf4015ee51066610468ba48d3ce7e3dbdd86e6ab2e187a48969887165" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.986442 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 06 14:18:57 crc kubenswrapper[4869]: I0106 14:18:57.986656 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.070150 4869 scope.go:117] "RemoveContainer" containerID="48f4823115caf7c48fbe4283a29199826490bfd152233c51f136cf548437054c" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.106618 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc37f611-b36d-45d4-9434-9b8ca3e83efb-run-httpd\") pod \"ceilometer-0\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.106687 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.106722 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-config-data\") pod \"ceilometer-0\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.106758 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-scripts\") pod \"ceilometer-0\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.106810 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwmjr\" (UniqueName: \"kubernetes.io/projected/bc37f611-b36d-45d4-9434-9b8ca3e83efb-kube-api-access-nwmjr\") pod \"ceilometer-0\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.106996 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.107046 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc37f611-b36d-45d4-9434-9b8ca3e83efb-log-httpd\") pod \"ceilometer-0\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.208573 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.208701 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc37f611-b36d-45d4-9434-9b8ca3e83efb-log-httpd\") pod \"ceilometer-0\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.208825 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc37f611-b36d-45d4-9434-9b8ca3e83efb-run-httpd\") pod \"ceilometer-0\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.208859 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.209725 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-config-data\") pod \"ceilometer-0\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.209430 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc37f611-b36d-45d4-9434-9b8ca3e83efb-log-httpd\") pod \"ceilometer-0\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.209578 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc37f611-b36d-45d4-9434-9b8ca3e83efb-run-httpd\") pod \"ceilometer-0\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.209884 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-scripts\") pod \"ceilometer-0\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.209993 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwmjr\" (UniqueName: \"kubernetes.io/projected/bc37f611-b36d-45d4-9434-9b8ca3e83efb-kube-api-access-nwmjr\") pod \"ceilometer-0\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.212473 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.213641 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-scripts\") pod \"ceilometer-0\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.213753 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.219033 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-config-data\") pod \"ceilometer-0\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.237535 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwmjr\" (UniqueName: \"kubernetes.io/projected/bc37f611-b36d-45d4-9434-9b8ca3e83efb-kube-api-access-nwmjr\") pod \"ceilometer-0\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.314189 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.808225 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:18:58 crc kubenswrapper[4869]: W0106 14:18:58.815063 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc37f611_b36d_45d4_9434_9b8ca3e83efb.slice/crio-319ee49598248f702f1c739930beb87284ed9768ddb168c92116f380986879e2 WatchSource:0}: Error finding container 319ee49598248f702f1c739930beb87284ed9768ddb168c92116f380986879e2: Status 404 returned error can't find the container with id 319ee49598248f702f1c739930beb87284ed9768ddb168c92116f380986879e2 Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.896087 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5599cd5d56-8h5sr" event={"ID":"fce6b66f-ac24-4b7b-98aa-39a87666921b","Type":"ContainerStarted","Data":"c9012ce499fe2b6730d510a88362e5553873227d7e1cd3a11be723e0f92a42e2"} Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.908621 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7df5f64db9-vv7hq" event={"ID":"a6c68901-bae4-40c6-a65d-a7b0834e2d71","Type":"ContainerStarted","Data":"840237b698c4cbe09e5622ef11617ed1702c2737db47fad40f195e34605a873e"} Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.913487 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a881307-a568-4715-95d3-59aa91b69477","Type":"ContainerStarted","Data":"6dd5df6c6a536286d1eac0f69bc4a3c632e63c375e42acd46f87a82b6d2d9a3d"} Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.913692 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0a881307-a568-4715-95d3-59aa91b69477" containerName="cinder-api-log" containerID="cri-o://4d739cf54ed174894d76d5d15b784f62e5ef5609637c57258eb64aa320b4d313" gracePeriod=30 Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.913884 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.913930 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0a881307-a568-4715-95d3-59aa91b69477" containerName="cinder-api" containerID="cri-o://6dd5df6c6a536286d1eac0f69bc4a3c632e63c375e42acd46f87a82b6d2d9a3d" gracePeriod=30 Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.914999 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5599cd5d56-8h5sr" podStartSLOduration=2.7802023350000002 podStartE2EDuration="4.914980473s" podCreationTimestamp="2026-01-06 14:18:54 +0000 UTC" firstStartedPulling="2026-01-06 14:18:55.23576765 +0000 UTC m=+1153.775455314" lastFinishedPulling="2026-01-06 14:18:57.370545778 +0000 UTC m=+1155.910233452" observedRunningTime="2026-01-06 14:18:58.912916911 +0000 UTC m=+1157.452604585" watchObservedRunningTime="2026-01-06 14:18:58.914980473 +0000 UTC m=+1157.454668137" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.928166 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc37f611-b36d-45d4-9434-9b8ca3e83efb","Type":"ContainerStarted","Data":"319ee49598248f702f1c739930beb87284ed9768ddb168c92116f380986879e2"} Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.934470 4869 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7df5f64db9-vv7hq" podStartSLOduration=3.58094032 podStartE2EDuration="5.934455365s" podCreationTimestamp="2026-01-06 14:18:53 +0000 UTC" firstStartedPulling="2026-01-06 14:18:55.014842239 +0000 UTC m=+1153.554529903" lastFinishedPulling="2026-01-06 14:18:57.368357284 +0000 UTC m=+1155.908044948" observedRunningTime="2026-01-06 14:18:58.931814511 +0000 UTC m=+1157.471502175" watchObservedRunningTime="2026-01-06 14:18:58.934455365 +0000 UTC m=+1157.474143019" Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.947886 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"72361b1f-122a-46d2-9acc-a0ccdb892326","Type":"ContainerStarted","Data":"24084d3956b81c4f957adecf76348045d76c5cfa6d153bd1f2b28e981c44cc6e"} Jan 06 14:18:58 crc kubenswrapper[4869]: I0106 14:18:58.964367 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.964346888 podStartE2EDuration="3.964346888s" podCreationTimestamp="2026-01-06 14:18:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:18:58.963777903 +0000 UTC m=+1157.503465567" watchObservedRunningTime="2026-01-06 14:18:58.964346888 +0000 UTC m=+1157.504034552" Jan 06 14:18:59 crc kubenswrapper[4869]: I0106 14:18:59.718184 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48df0513-e689-44db-8e53-3aa186ab3063" path="/var/lib/kubelet/pods/48df0513-e689-44db-8e53-3aa186ab3063/volumes" Jan 06 14:18:59 crc kubenswrapper[4869]: I0106 14:18:59.904337 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 06 14:18:59 crc kubenswrapper[4869]: I0106 14:18:59.980501 4869 generic.go:334] "Generic (PLEG): container finished" podID="0a881307-a568-4715-95d3-59aa91b69477" containerID="6dd5df6c6a536286d1eac0f69bc4a3c632e63c375e42acd46f87a82b6d2d9a3d" exitCode=0 Jan 06 14:18:59 crc kubenswrapper[4869]: I0106 14:18:59.981160 4869 generic.go:334] "Generic (PLEG): container finished" podID="0a881307-a568-4715-95d3-59aa91b69477" containerID="4d739cf54ed174894d76d5d15b784f62e5ef5609637c57258eb64aa320b4d313" exitCode=143 Jan 06 14:18:59 crc kubenswrapper[4869]: I0106 14:18:59.980611 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a881307-a568-4715-95d3-59aa91b69477","Type":"ContainerDied","Data":"6dd5df6c6a536286d1eac0f69bc4a3c632e63c375e42acd46f87a82b6d2d9a3d"} Jan 06 14:18:59 crc kubenswrapper[4869]: I0106 14:18:59.981568 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a881307-a568-4715-95d3-59aa91b69477","Type":"ContainerDied","Data":"4d739cf54ed174894d76d5d15b784f62e5ef5609637c57258eb64aa320b4d313"} Jan 06 14:18:59 crc kubenswrapper[4869]: I0106 14:18:59.981795 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a881307-a568-4715-95d3-59aa91b69477","Type":"ContainerDied","Data":"edce37e75bca54edc97174ba20a960f96481295b6d9a53867c7ab8ef89bcf0a5"} Jan 06 14:18:59 crc kubenswrapper[4869]: I0106 14:18:59.981757 4869 scope.go:117] "RemoveContainer" containerID="6dd5df6c6a536286d1eac0f69bc4a3c632e63c375e42acd46f87a82b6d2d9a3d" Jan 06 14:18:59 crc kubenswrapper[4869]: I0106 14:18:59.980578 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 06 14:18:59 crc kubenswrapper[4869]: I0106 14:18:59.984415 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc37f611-b36d-45d4-9434-9b8ca3e83efb","Type":"ContainerStarted","Data":"c1aaa1af5ee73a9994f7da79a8c1783357484f8a354b82cb30a212ca5ab86fcc"} Jan 06 14:18:59 crc kubenswrapper[4869]: I0106 14:18:59.987992 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"72361b1f-122a-46d2-9acc-a0ccdb892326","Type":"ContainerStarted","Data":"2fc96364fbc6620e499bcb6b8edd7889210c92ac1568d0bdf1d06ee388fbc189"} Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.014872 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.554412587 podStartE2EDuration="6.014846148s" podCreationTimestamp="2026-01-06 14:18:54 +0000 UTC" firstStartedPulling="2026-01-06 14:18:56.187471799 +0000 UTC m=+1154.727159463" lastFinishedPulling="2026-01-06 14:18:57.64790536 +0000 UTC m=+1156.187593024" observedRunningTime="2026-01-06 14:19:00.012572461 +0000 UTC m=+1158.552260125" watchObservedRunningTime="2026-01-06 14:19:00.014846148 +0000 UTC m=+1158.554533812" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.033211 4869 scope.go:117] "RemoveContainer" containerID="4d739cf54ed174894d76d5d15b784f62e5ef5609637c57258eb64aa320b4d313" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.058486 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-combined-ca-bundle\") pod \"0a881307-a568-4715-95d3-59aa91b69477\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.058584 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-config-data-custom\") pod \"0a881307-a568-4715-95d3-59aa91b69477\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.058711 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-config-data\") pod \"0a881307-a568-4715-95d3-59aa91b69477\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.058756 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a881307-a568-4715-95d3-59aa91b69477-etc-machine-id\") pod \"0a881307-a568-4715-95d3-59aa91b69477\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.058868 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-scripts\") pod \"0a881307-a568-4715-95d3-59aa91b69477\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.058952 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a881307-a568-4715-95d3-59aa91b69477-logs\") pod \"0a881307-a568-4715-95d3-59aa91b69477\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " Jan 06 
14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.059019 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khnj5\" (UniqueName: \"kubernetes.io/projected/0a881307-a568-4715-95d3-59aa91b69477-kube-api-access-khnj5\") pod \"0a881307-a568-4715-95d3-59aa91b69477\" (UID: \"0a881307-a568-4715-95d3-59aa91b69477\") " Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.061072 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a881307-a568-4715-95d3-59aa91b69477-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0a881307-a568-4715-95d3-59aa91b69477" (UID: "0a881307-a568-4715-95d3-59aa91b69477"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.061611 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a881307-a568-4715-95d3-59aa91b69477-logs" (OuterVolumeSpecName: "logs") pod "0a881307-a568-4715-95d3-59aa91b69477" (UID: "0a881307-a568-4715-95d3-59aa91b69477"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.064029 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a881307-a568-4715-95d3-59aa91b69477-kube-api-access-khnj5" (OuterVolumeSpecName: "kube-api-access-khnj5") pod "0a881307-a568-4715-95d3-59aa91b69477" (UID: "0a881307-a568-4715-95d3-59aa91b69477"). InnerVolumeSpecName "kube-api-access-khnj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.064394 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-scripts" (OuterVolumeSpecName: "scripts") pod "0a881307-a568-4715-95d3-59aa91b69477" (UID: "0a881307-a568-4715-95d3-59aa91b69477"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.064893 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0a881307-a568-4715-95d3-59aa91b69477" (UID: "0a881307-a568-4715-95d3-59aa91b69477"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.090042 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a881307-a568-4715-95d3-59aa91b69477" (UID: "0a881307-a568-4715-95d3-59aa91b69477"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.113674 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-config-data" (OuterVolumeSpecName: "config-data") pod "0a881307-a568-4715-95d3-59aa91b69477" (UID: "0a881307-a568-4715-95d3-59aa91b69477"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.160756 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a881307-a568-4715-95d3-59aa91b69477-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.160797 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.160811 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a881307-a568-4715-95d3-59aa91b69477-logs\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.160824 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khnj5\" (UniqueName: \"kubernetes.io/projected/0a881307-a568-4715-95d3-59aa91b69477-kube-api-access-khnj5\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.160839 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.160849 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.160860 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a881307-a568-4715-95d3-59aa91b69477-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.187555 4869 scope.go:117] "RemoveContainer" containerID="6dd5df6c6a536286d1eac0f69bc4a3c632e63c375e42acd46f87a82b6d2d9a3d" Jan 06 14:19:00 crc kubenswrapper[4869]: E0106 14:19:00.188038 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dd5df6c6a536286d1eac0f69bc4a3c632e63c375e42acd46f87a82b6d2d9a3d\": container with ID starting with 6dd5df6c6a536286d1eac0f69bc4a3c632e63c375e42acd46f87a82b6d2d9a3d not found: ID does not exist" containerID="6dd5df6c6a536286d1eac0f69bc4a3c632e63c375e42acd46f87a82b6d2d9a3d" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.188091 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dd5df6c6a536286d1eac0f69bc4a3c632e63c375e42acd46f87a82b6d2d9a3d"} err="failed to get container status \"6dd5df6c6a536286d1eac0f69bc4a3c632e63c375e42acd46f87a82b6d2d9a3d\": rpc error: code = NotFound desc = could not find container \"6dd5df6c6a536286d1eac0f69bc4a3c632e63c375e42acd46f87a82b6d2d9a3d\": container with ID starting with 6dd5df6c6a536286d1eac0f69bc4a3c632e63c375e42acd46f87a82b6d2d9a3d not found: ID does not exist" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.188122 4869 scope.go:117] "RemoveContainer" containerID="4d739cf54ed174894d76d5d15b784f62e5ef5609637c57258eb64aa320b4d313" Jan 06 14:19:00 crc kubenswrapper[4869]: E0106 14:19:00.188434 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4d739cf54ed174894d76d5d15b784f62e5ef5609637c57258eb64aa320b4d313\": container with ID starting with 4d739cf54ed174894d76d5d15b784f62e5ef5609637c57258eb64aa320b4d313 not found: ID does not exist" containerID="4d739cf54ed174894d76d5d15b784f62e5ef5609637c57258eb64aa320b4d313" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.188518 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d739cf54ed174894d76d5d15b784f62e5ef5609637c57258eb64aa320b4d313"} err="failed to get container status \"4d739cf54ed174894d76d5d15b784f62e5ef5609637c57258eb64aa320b4d313\": rpc error: code = NotFound desc = could not find container \"4d739cf54ed174894d76d5d15b784f62e5ef5609637c57258eb64aa320b4d313\": container with ID starting with 4d739cf54ed174894d76d5d15b784f62e5ef5609637c57258eb64aa320b4d313 not found: ID does not exist" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.188534 4869 scope.go:117] "RemoveContainer" containerID="6dd5df6c6a536286d1eac0f69bc4a3c632e63c375e42acd46f87a82b6d2d9a3d" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.188795 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dd5df6c6a536286d1eac0f69bc4a3c632e63c375e42acd46f87a82b6d2d9a3d"} err="failed to get container status \"6dd5df6c6a536286d1eac0f69bc4a3c632e63c375e42acd46f87a82b6d2d9a3d\": rpc error: code = NotFound desc = could not find container \"6dd5df6c6a536286d1eac0f69bc4a3c632e63c375e42acd46f87a82b6d2d9a3d\": container with ID starting with 6dd5df6c6a536286d1eac0f69bc4a3c632e63c375e42acd46f87a82b6d2d9a3d not found: ID does not exist" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.188821 4869 scope.go:117] "RemoveContainer" containerID="4d739cf54ed174894d76d5d15b784f62e5ef5609637c57258eb64aa320b4d313" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.189036 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d739cf54ed174894d76d5d15b784f62e5ef5609637c57258eb64aa320b4d313"} err="failed to get container status \"4d739cf54ed174894d76d5d15b784f62e5ef5609637c57258eb64aa320b4d313\": rpc error: code = NotFound desc = could not find container \"4d739cf54ed174894d76d5d15b784f62e5ef5609637c57258eb64aa320b4d313\": container with ID starting with 4d739cf54ed174894d76d5d15b784f62e5ef5609637c57258eb64aa320b4d313 not found: ID does not exist" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.320269 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.343805 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.355800 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 06 14:19:00 crc kubenswrapper[4869]: E0106 14:19:00.356185 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a881307-a568-4715-95d3-59aa91b69477" containerName="cinder-api" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.356207 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a881307-a568-4715-95d3-59aa91b69477" containerName="cinder-api" Jan 06 14:19:00 crc kubenswrapper[4869]: E0106 14:19:00.356227 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a881307-a568-4715-95d3-59aa91b69477" containerName="cinder-api-log" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.356236 4869 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="0a881307-a568-4715-95d3-59aa91b69477" containerName="cinder-api-log" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.356424 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a881307-a568-4715-95d3-59aa91b69477" containerName="cinder-api-log" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.356450 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a881307-a568-4715-95d3-59aa91b69477" containerName="cinder-api" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.357502 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.361853 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.362030 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.362233 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.369083 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.464870 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdcb07fc-c984-417a-aecb-6f0a2a83f487-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.464915 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdcb07fc-c984-417a-aecb-6f0a2a83f487-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.464943 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdcb07fc-c984-417a-aecb-6f0a2a83f487-config-data-custom\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.464963 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhc48\" (UniqueName: \"kubernetes.io/projected/bdcb07fc-c984-417a-aecb-6f0a2a83f487-kube-api-access-rhc48\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.465206 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdcb07fc-c984-417a-aecb-6f0a2a83f487-config-data\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.465398 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdcb07fc-c984-417a-aecb-6f0a2a83f487-logs\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " 
pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.465444 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bdcb07fc-c984-417a-aecb-6f0a2a83f487-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.465613 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdcb07fc-c984-417a-aecb-6f0a2a83f487-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.465771 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdcb07fc-c984-417a-aecb-6f0a2a83f487-scripts\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.569386 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bdcb07fc-c984-417a-aecb-6f0a2a83f487-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.569489 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bdcb07fc-c984-417a-aecb-6f0a2a83f487-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.569595 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdcb07fc-c984-417a-aecb-6f0a2a83f487-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.569814 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdcb07fc-c984-417a-aecb-6f0a2a83f487-scripts\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.569996 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdcb07fc-c984-417a-aecb-6f0a2a83f487-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.570104 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdcb07fc-c984-417a-aecb-6f0a2a83f487-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.570208 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdcb07fc-c984-417a-aecb-6f0a2a83f487-config-data-custom\") pod \"cinder-api-0\" 
(UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.570298 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhc48\" (UniqueName: \"kubernetes.io/projected/bdcb07fc-c984-417a-aecb-6f0a2a83f487-kube-api-access-rhc48\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.570471 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdcb07fc-c984-417a-aecb-6f0a2a83f487-config-data\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.571517 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdcb07fc-c984-417a-aecb-6f0a2a83f487-logs\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.570659 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bdcb07fc-c984-417a-aecb-6f0a2a83f487-logs\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.589102 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdcb07fc-c984-417a-aecb-6f0a2a83f487-config-data-custom\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.589393 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.589824 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdcb07fc-c984-417a-aecb-6f0a2a83f487-scripts\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.591201 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdcb07fc-c984-417a-aecb-6f0a2a83f487-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.592559 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdcb07fc-c984-417a-aecb-6f0a2a83f487-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.593108 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdcb07fc-c984-417a-aecb-6f0a2a83f487-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.597043 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bdcb07fc-c984-417a-aecb-6f0a2a83f487-config-data\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.600817 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhc48\" (UniqueName: \"kubernetes.io/projected/bdcb07fc-c984-417a-aecb-6f0a2a83f487-kube-api-access-rhc48\") pod \"cinder-api-0\" (UID: \"bdcb07fc-c984-417a-aecb-6f0a2a83f487\") " pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.706511 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.826579 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8685d8b6-46cdt"] Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.830608 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.838897 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8685d8b6-46cdt"] Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.849427 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.849940 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.979980 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/170f68c0-a435-4022-8c3b-82f60b06fbac-logs\") pod \"barbican-api-8685d8b6-46cdt\" (UID: \"170f68c0-a435-4022-8c3b-82f60b06fbac\") " pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.980056 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/170f68c0-a435-4022-8c3b-82f60b06fbac-internal-tls-certs\") pod \"barbican-api-8685d8b6-46cdt\" (UID: \"170f68c0-a435-4022-8c3b-82f60b06fbac\") " pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.980106 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/170f68c0-a435-4022-8c3b-82f60b06fbac-config-data-custom\") pod \"barbican-api-8685d8b6-46cdt\" (UID: \"170f68c0-a435-4022-8c3b-82f60b06fbac\") " pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.980189 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170f68c0-a435-4022-8c3b-82f60b06fbac-combined-ca-bundle\") pod \"barbican-api-8685d8b6-46cdt\" (UID: \"170f68c0-a435-4022-8c3b-82f60b06fbac\") " pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.980226 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqx7s\" (UniqueName: \"kubernetes.io/projected/170f68c0-a435-4022-8c3b-82f60b06fbac-kube-api-access-gqx7s\") pod \"barbican-api-8685d8b6-46cdt\" (UID: \"170f68c0-a435-4022-8c3b-82f60b06fbac\") " 
pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.980299 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/170f68c0-a435-4022-8c3b-82f60b06fbac-public-tls-certs\") pod \"barbican-api-8685d8b6-46cdt\" (UID: \"170f68c0-a435-4022-8c3b-82f60b06fbac\") " pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.980346 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/170f68c0-a435-4022-8c3b-82f60b06fbac-config-data\") pod \"barbican-api-8685d8b6-46cdt\" (UID: \"170f68c0-a435-4022-8c3b-82f60b06fbac\") " pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:00 crc kubenswrapper[4869]: I0106 14:19:00.998377 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc37f611-b36d-45d4-9434-9b8ca3e83efb","Type":"ContainerStarted","Data":"c2e4c22e4953d8f8524b30d9cbaf05317c09f7e037970f408ce6e275c86564f8"} Jan 06 14:19:01 crc kubenswrapper[4869]: I0106 14:19:01.082234 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/170f68c0-a435-4022-8c3b-82f60b06fbac-logs\") pod \"barbican-api-8685d8b6-46cdt\" (UID: \"170f68c0-a435-4022-8c3b-82f60b06fbac\") " pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:01 crc kubenswrapper[4869]: I0106 14:19:01.082311 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/170f68c0-a435-4022-8c3b-82f60b06fbac-internal-tls-certs\") pod \"barbican-api-8685d8b6-46cdt\" (UID: \"170f68c0-a435-4022-8c3b-82f60b06fbac\") " pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:01 crc kubenswrapper[4869]: I0106 14:19:01.082369 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/170f68c0-a435-4022-8c3b-82f60b06fbac-config-data-custom\") pod \"barbican-api-8685d8b6-46cdt\" (UID: \"170f68c0-a435-4022-8c3b-82f60b06fbac\") " pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:01 crc kubenswrapper[4869]: I0106 14:19:01.082454 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170f68c0-a435-4022-8c3b-82f60b06fbac-combined-ca-bundle\") pod \"barbican-api-8685d8b6-46cdt\" (UID: \"170f68c0-a435-4022-8c3b-82f60b06fbac\") " pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:01 crc kubenswrapper[4869]: I0106 14:19:01.082512 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqx7s\" (UniqueName: \"kubernetes.io/projected/170f68c0-a435-4022-8c3b-82f60b06fbac-kube-api-access-gqx7s\") pod \"barbican-api-8685d8b6-46cdt\" (UID: \"170f68c0-a435-4022-8c3b-82f60b06fbac\") " pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:01 crc kubenswrapper[4869]: I0106 14:19:01.082577 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/170f68c0-a435-4022-8c3b-82f60b06fbac-public-tls-certs\") pod \"barbican-api-8685d8b6-46cdt\" (UID: \"170f68c0-a435-4022-8c3b-82f60b06fbac\") " pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:01 crc kubenswrapper[4869]: I0106 14:19:01.082602 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/170f68c0-a435-4022-8c3b-82f60b06fbac-config-data\") pod \"barbican-api-8685d8b6-46cdt\" (UID: \"170f68c0-a435-4022-8c3b-82f60b06fbac\") " pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:01 crc kubenswrapper[4869]: I0106 14:19:01.086043 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/170f68c0-a435-4022-8c3b-82f60b06fbac-logs\") pod \"barbican-api-8685d8b6-46cdt\" (UID: \"170f68c0-a435-4022-8c3b-82f60b06fbac\") " pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:01 crc kubenswrapper[4869]: I0106 14:19:01.098586 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170f68c0-a435-4022-8c3b-82f60b06fbac-combined-ca-bundle\") pod \"barbican-api-8685d8b6-46cdt\" (UID: \"170f68c0-a435-4022-8c3b-82f60b06fbac\") " pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:01 crc kubenswrapper[4869]: I0106 14:19:01.099199 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/170f68c0-a435-4022-8c3b-82f60b06fbac-config-data\") pod \"barbican-api-8685d8b6-46cdt\" (UID: \"170f68c0-a435-4022-8c3b-82f60b06fbac\") " pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:01 crc kubenswrapper[4869]: I0106 14:19:01.102921 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/170f68c0-a435-4022-8c3b-82f60b06fbac-config-data-custom\") pod \"barbican-api-8685d8b6-46cdt\" (UID: \"170f68c0-a435-4022-8c3b-82f60b06fbac\") " pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:01 crc kubenswrapper[4869]: I0106 14:19:01.103420 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/170f68c0-a435-4022-8c3b-82f60b06fbac-internal-tls-certs\") pod \"barbican-api-8685d8b6-46cdt\" (UID: \"170f68c0-a435-4022-8c3b-82f60b06fbac\") " pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:01 crc kubenswrapper[4869]: I0106 14:19:01.105469 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/170f68c0-a435-4022-8c3b-82f60b06fbac-public-tls-certs\") pod \"barbican-api-8685d8b6-46cdt\" (UID: \"170f68c0-a435-4022-8c3b-82f60b06fbac\") " pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:01 crc kubenswrapper[4869]: I0106 14:19:01.118950 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqx7s\" (UniqueName: \"kubernetes.io/projected/170f68c0-a435-4022-8c3b-82f60b06fbac-kube-api-access-gqx7s\") pod \"barbican-api-8685d8b6-46cdt\" (UID: \"170f68c0-a435-4022-8c3b-82f60b06fbac\") " pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:01 crc kubenswrapper[4869]: I0106 14:19:01.173263 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:01 crc kubenswrapper[4869]: I0106 14:19:01.246446 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 06 14:19:01 crc kubenswrapper[4869]: W0106 14:19:01.253756 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdcb07fc_c984_417a_aecb_6f0a2a83f487.slice/crio-d3c0f17c877ae5f32941ad22b28be63341afd29afd93ce9c55f94367d9066df4 WatchSource:0}: Error finding container d3c0f17c877ae5f32941ad22b28be63341afd29afd93ce9c55f94367d9066df4: Status 404 returned error can't find the container with id d3c0f17c877ae5f32941ad22b28be63341afd29afd93ce9c55f94367d9066df4 Jan 06 14:19:01 crc kubenswrapper[4869]: I0106 14:19:01.620902 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8685d8b6-46cdt"] Jan 06 14:19:01 crc kubenswrapper[4869]: W0106 14:19:01.624416 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod170f68c0_a435_4022_8c3b_82f60b06fbac.slice/crio-3f2f7f4349cb0b610c29c8beebb75ef36a1dcaed3d9055986500cc156afb64d1 WatchSource:0}: Error finding container 3f2f7f4349cb0b610c29c8beebb75ef36a1dcaed3d9055986500cc156afb64d1: Status 404 returned error can't find the container with id 3f2f7f4349cb0b610c29c8beebb75ef36a1dcaed3d9055986500cc156afb64d1 Jan 06 14:19:01 crc kubenswrapper[4869]: I0106 14:19:01.719799 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a881307-a568-4715-95d3-59aa91b69477" path="/var/lib/kubelet/pods/0a881307-a568-4715-95d3-59aa91b69477/volumes" Jan 06 14:19:02 crc kubenswrapper[4869]: I0106 14:19:02.037461 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bdcb07fc-c984-417a-aecb-6f0a2a83f487","Type":"ContainerStarted","Data":"c77c2dd3db7bff5be633c68d4285b746e4ee841cfdcf22e108d6b6d24c3539d1"} Jan 06 14:19:02 crc kubenswrapper[4869]: I0106 14:19:02.039029 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bdcb07fc-c984-417a-aecb-6f0a2a83f487","Type":"ContainerStarted","Data":"d3c0f17c877ae5f32941ad22b28be63341afd29afd93ce9c55f94367d9066df4"} Jan 06 14:19:02 crc kubenswrapper[4869]: I0106 14:19:02.041456 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8685d8b6-46cdt" event={"ID":"170f68c0-a435-4022-8c3b-82f60b06fbac","Type":"ContainerStarted","Data":"3fc479d733c9d58fabb96c65979f899d4b12ce08f0ae9fee85065f62456fce39"} Jan 06 14:19:02 crc kubenswrapper[4869]: I0106 14:19:02.041627 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8685d8b6-46cdt" event={"ID":"170f68c0-a435-4022-8c3b-82f60b06fbac","Type":"ContainerStarted","Data":"3f2f7f4349cb0b610c29c8beebb75ef36a1dcaed3d9055986500cc156afb64d1"} Jan 06 14:19:02 crc kubenswrapper[4869]: I0106 14:19:02.082897 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc37f611-b36d-45d4-9434-9b8ca3e83efb","Type":"ContainerStarted","Data":"ec203f981983ae47bbc6d9664600ece41d3d5e7dc7e27168aebe979618da4c4c"} Jan 06 14:19:03 crc kubenswrapper[4869]: I0106 14:19:03.098318 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bdcb07fc-c984-417a-aecb-6f0a2a83f487","Type":"ContainerStarted","Data":"cb5d6d4e816ec86f8044a21042712cf2d855808ba5b4e983a844401bd8897880"} Jan 06 14:19:03 
crc kubenswrapper[4869]: I0106 14:19:03.099345 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 06 14:19:03 crc kubenswrapper[4869]: I0106 14:19:03.110477 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8685d8b6-46cdt" event={"ID":"170f68c0-a435-4022-8c3b-82f60b06fbac","Type":"ContainerStarted","Data":"ef31e34e1aefc36bf9eec6a47efe68828c20c55c51d6194b0a00f368669afbcd"} Jan 06 14:19:03 crc kubenswrapper[4869]: I0106 14:19:03.110617 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:03 crc kubenswrapper[4869]: I0106 14:19:03.110641 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:03 crc kubenswrapper[4869]: I0106 14:19:03.113544 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc37f611-b36d-45d4-9434-9b8ca3e83efb","Type":"ContainerStarted","Data":"ae009516ecbbdb314fb43f62169ea632fdfc67e3d15666b027cf224d01cfe027"} Jan 06 14:19:03 crc kubenswrapper[4869]: I0106 14:19:03.114318 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 06 14:19:03 crc kubenswrapper[4869]: I0106 14:19:03.122903 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.122880202 podStartE2EDuration="3.122880202s" podCreationTimestamp="2026-01-06 14:19:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:19:03.119078807 +0000 UTC m=+1161.658766471" watchObservedRunningTime="2026-01-06 14:19:03.122880202 +0000 UTC m=+1161.662567876" Jan 06 14:19:03 crc kubenswrapper[4869]: I0106 14:19:03.153916 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.71748747 podStartE2EDuration="6.153897221s" podCreationTimestamp="2026-01-06 14:18:57 +0000 UTC" firstStartedPulling="2026-01-06 14:18:58.819396981 +0000 UTC m=+1157.359084645" lastFinishedPulling="2026-01-06 14:19:02.255806732 +0000 UTC m=+1160.795494396" observedRunningTime="2026-01-06 14:19:03.148338983 +0000 UTC m=+1161.688026657" watchObservedRunningTime="2026-01-06 14:19:03.153897221 +0000 UTC m=+1161.693584885" Jan 06 14:19:03 crc kubenswrapper[4869]: I0106 14:19:03.168654 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8685d8b6-46cdt" podStartSLOduration=3.168629917 podStartE2EDuration="3.168629917s" podCreationTimestamp="2026-01-06 14:19:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:19:03.164889934 +0000 UTC m=+1161.704577608" watchObservedRunningTime="2026-01-06 14:19:03.168629917 +0000 UTC m=+1161.708317581" Jan 06 14:19:05 crc kubenswrapper[4869]: I0106 14:19:05.668403 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" Jan 06 14:19:05 crc kubenswrapper[4869]: I0106 14:19:05.755793 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-f4ct6"] Jan 06 14:19:05 crc kubenswrapper[4869]: I0106 14:19:05.756283 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" 
podUID="be800df2-784f-45eb-b280-81679e58eb7a" containerName="dnsmasq-dns" containerID="cri-o://27d83b31ce532544649af4ae7eca190afd86cf38960ba185596e9dec94f512dc" gracePeriod=10 Jan 06 14:19:05 crc kubenswrapper[4869]: I0106 14:19:05.929606 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 06 14:19:05 crc kubenswrapper[4869]: I0106 14:19:05.991490 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.142176 4869 generic.go:334] "Generic (PLEG): container finished" podID="be800df2-784f-45eb-b280-81679e58eb7a" containerID="27d83b31ce532544649af4ae7eca190afd86cf38960ba185596e9dec94f512dc" exitCode=0 Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.142577 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="72361b1f-122a-46d2-9acc-a0ccdb892326" containerName="cinder-scheduler" containerID="cri-o://24084d3956b81c4f957adecf76348045d76c5cfa6d153bd1f2b28e981c44cc6e" gracePeriod=30 Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.142681 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" event={"ID":"be800df2-784f-45eb-b280-81679e58eb7a","Type":"ContainerDied","Data":"27d83b31ce532544649af4ae7eca190afd86cf38960ba185596e9dec94f512dc"} Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.142821 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="72361b1f-122a-46d2-9acc-a0ccdb892326" containerName="probe" containerID="cri-o://2fc96364fbc6620e499bcb6b8edd7889210c92ac1568d0bdf1d06ee388fbc189" gracePeriod=30 Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.189200 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.274859 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.278494 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.386445 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-ovsdbserver-nb\") pod \"be800df2-784f-45eb-b280-81679e58eb7a\" (UID: \"be800df2-784f-45eb-b280-81679e58eb7a\") " Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.386781 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-dns-svc\") pod \"be800df2-784f-45eb-b280-81679e58eb7a\" (UID: \"be800df2-784f-45eb-b280-81679e58eb7a\") " Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.386855 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-config\") pod \"be800df2-784f-45eb-b280-81679e58eb7a\" (UID: \"be800df2-784f-45eb-b280-81679e58eb7a\") " Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.386893 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-ovsdbserver-sb\") pod \"be800df2-784f-45eb-b280-81679e58eb7a\" (UID: \"be800df2-784f-45eb-b280-81679e58eb7a\") " Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.386963 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4rl9\" (UniqueName: \"kubernetes.io/projected/be800df2-784f-45eb-b280-81679e58eb7a-kube-api-access-p4rl9\") pod \"be800df2-784f-45eb-b280-81679e58eb7a\" (UID: \"be800df2-784f-45eb-b280-81679e58eb7a\") " Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.392061 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be800df2-784f-45eb-b280-81679e58eb7a-kube-api-access-p4rl9" (OuterVolumeSpecName: "kube-api-access-p4rl9") pod "be800df2-784f-45eb-b280-81679e58eb7a" (UID: "be800df2-784f-45eb-b280-81679e58eb7a"). InnerVolumeSpecName "kube-api-access-p4rl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.435614 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "be800df2-784f-45eb-b280-81679e58eb7a" (UID: "be800df2-784f-45eb-b280-81679e58eb7a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.449034 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-config" (OuterVolumeSpecName: "config") pod "be800df2-784f-45eb-b280-81679e58eb7a" (UID: "be800df2-784f-45eb-b280-81679e58eb7a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.464417 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "be800df2-784f-45eb-b280-81679e58eb7a" (UID: "be800df2-784f-45eb-b280-81679e58eb7a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.466719 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "be800df2-784f-45eb-b280-81679e58eb7a" (UID: "be800df2-784f-45eb-b280-81679e58eb7a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.489016 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4rl9\" (UniqueName: \"kubernetes.io/projected/be800df2-784f-45eb-b280-81679e58eb7a-kube-api-access-p4rl9\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.489072 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.489104 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.489132 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:06 crc kubenswrapper[4869]: I0106 14:19:06.489141 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be800df2-784f-45eb-b280-81679e58eb7a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:07 crc kubenswrapper[4869]: I0106 14:19:07.152992 4869 generic.go:334] "Generic (PLEG): container finished" podID="72361b1f-122a-46d2-9acc-a0ccdb892326" containerID="2fc96364fbc6620e499bcb6b8edd7889210c92ac1568d0bdf1d06ee388fbc189" exitCode=0 Jan 06 14:19:07 crc kubenswrapper[4869]: I0106 14:19:07.153279 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"72361b1f-122a-46d2-9acc-a0ccdb892326","Type":"ContainerDied","Data":"2fc96364fbc6620e499bcb6b8edd7889210c92ac1568d0bdf1d06ee388fbc189"} Jan 06 14:19:07 crc kubenswrapper[4869]: I0106 14:19:07.155203 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" Jan 06 14:19:07 crc kubenswrapper[4869]: I0106 14:19:07.155256 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-f4ct6" event={"ID":"be800df2-784f-45eb-b280-81679e58eb7a","Type":"ContainerDied","Data":"b743e824761432e2e9a6432c4d8f8557378203d90da27099cd0b5a9a38a92f0a"} Jan 06 14:19:07 crc kubenswrapper[4869]: I0106 14:19:07.155285 4869 scope.go:117] "RemoveContainer" containerID="27d83b31ce532544649af4ae7eca190afd86cf38960ba185596e9dec94f512dc" Jan 06 14:19:07 crc kubenswrapper[4869]: I0106 14:19:07.200175 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-f4ct6"] Jan 06 14:19:07 crc kubenswrapper[4869]: I0106 14:19:07.201305 4869 scope.go:117] "RemoveContainer" containerID="2f0fdecc92ed490e106f96f1120436f80636b21ba425d5baab810ad0fa60e0aa" Jan 06 14:19:07 crc kubenswrapper[4869]: I0106 14:19:07.213778 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-f4ct6"] Jan 06 14:19:07 crc kubenswrapper[4869]: I0106 14:19:07.715676 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be800df2-784f-45eb-b280-81679e58eb7a" path="/var/lib/kubelet/pods/be800df2-784f-45eb-b280-81679e58eb7a/volumes" Jan 06 14:19:07 crc kubenswrapper[4869]: I0106 14:19:07.886267 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:08 crc kubenswrapper[4869]: I0106 14:19:08.912404 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-dff58f544-954n8" Jan 06 14:19:09 crc kubenswrapper[4869]: I0106 14:19:09.308628 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8685d8b6-46cdt" Jan 06 14:19:09 crc kubenswrapper[4869]: I0106 14:19:09.368804 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6cbc78c6fb-zx9xt"] Jan 06 14:19:09 crc kubenswrapper[4869]: I0106 14:19:09.369078 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6cbc78c6fb-zx9xt" podUID="3f909e77-b5e4-46b0-b0b1-246b9fde7b73" containerName="barbican-api-log" containerID="cri-o://68fa3037263ef13431d633903b0a9ba84a0c4545ff4e451b987240f0ad644aaf" gracePeriod=30 Jan 06 14:19:09 crc kubenswrapper[4869]: I0106 14:19:09.369166 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6cbc78c6fb-zx9xt" podUID="3f909e77-b5e4-46b0-b0b1-246b9fde7b73" containerName="barbican-api" containerID="cri-o://9c9c7f12fc0d3f22247723a7d6a767df308c09b8fd911ae69d2cc364c0b6f424" gracePeriod=30 Jan 06 14:19:09 crc kubenswrapper[4869]: I0106 14:19:09.812190 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:19:09 crc kubenswrapper[4869]: I0106 14:19:09.812710 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-749bc7d596-scpc9" Jan 06 14:19:10 crc kubenswrapper[4869]: I0106 14:19:10.185687 4869 generic.go:334] "Generic (PLEG): container finished" podID="72361b1f-122a-46d2-9acc-a0ccdb892326" containerID="24084d3956b81c4f957adecf76348045d76c5cfa6d153bd1f2b28e981c44cc6e" exitCode=0 Jan 06 14:19:10 crc kubenswrapper[4869]: I0106 14:19:10.185752 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"72361b1f-122a-46d2-9acc-a0ccdb892326","Type":"ContainerDied","Data":"24084d3956b81c4f957adecf76348045d76c5cfa6d153bd1f2b28e981c44cc6e"} Jan 06 14:19:10 crc kubenswrapper[4869]: I0106 14:19:10.187791 4869 generic.go:334] "Generic (PLEG): container finished" podID="3f909e77-b5e4-46b0-b0b1-246b9fde7b73" containerID="68fa3037263ef13431d633903b0a9ba84a0c4545ff4e451b987240f0ad644aaf" exitCode=143 Jan 06 14:19:10 crc kubenswrapper[4869]: I0106 14:19:10.187825 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cbc78c6fb-zx9xt" event={"ID":"3f909e77-b5e4-46b0-b0b1-246b9fde7b73","Type":"ContainerDied","Data":"68fa3037263ef13431d633903b0a9ba84a0c4545ff4e451b987240f0ad644aaf"} Jan 06 14:19:10 crc kubenswrapper[4869]: I0106 14:19:10.303113 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5df48645c5-c7ccn" Jan 06 14:19:10 crc kubenswrapper[4869]: I0106 14:19:10.935440 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.117427 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsfch\" (UniqueName: \"kubernetes.io/projected/72361b1f-122a-46d2-9acc-a0ccdb892326-kube-api-access-bsfch\") pod \"72361b1f-122a-46d2-9acc-a0ccdb892326\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.117495 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-combined-ca-bundle\") pod \"72361b1f-122a-46d2-9acc-a0ccdb892326\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.117540 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-config-data\") pod \"72361b1f-122a-46d2-9acc-a0ccdb892326\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.117564 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-scripts\") pod \"72361b1f-122a-46d2-9acc-a0ccdb892326\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.117714 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72361b1f-122a-46d2-9acc-a0ccdb892326-etc-machine-id\") pod \"72361b1f-122a-46d2-9acc-a0ccdb892326\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.117742 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-config-data-custom\") pod \"72361b1f-122a-46d2-9acc-a0ccdb892326\" (UID: \"72361b1f-122a-46d2-9acc-a0ccdb892326\") " Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.118563 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72361b1f-122a-46d2-9acc-a0ccdb892326-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "72361b1f-122a-46d2-9acc-a0ccdb892326" (UID: "72361b1f-122a-46d2-9acc-a0ccdb892326"). 
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.123317 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-77f9b5db4f-c4t9m" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.124262 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-scripts" (OuterVolumeSpecName: "scripts") pod "72361b1f-122a-46d2-9acc-a0ccdb892326" (UID: "72361b1f-122a-46d2-9acc-a0ccdb892326"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.124658 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72361b1f-122a-46d2-9acc-a0ccdb892326-kube-api-access-bsfch" (OuterVolumeSpecName: "kube-api-access-bsfch") pod "72361b1f-122a-46d2-9acc-a0ccdb892326" (UID: "72361b1f-122a-46d2-9acc-a0ccdb892326"). InnerVolumeSpecName "kube-api-access-bsfch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.124768 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "72361b1f-122a-46d2-9acc-a0ccdb892326" (UID: "72361b1f-122a-46d2-9acc-a0ccdb892326"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.201942 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72361b1f-122a-46d2-9acc-a0ccdb892326" (UID: "72361b1f-122a-46d2-9acc-a0ccdb892326"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.220729 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.220764 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.220774 4869 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/72361b1f-122a-46d2-9acc-a0ccdb892326-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.220784 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.220792 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsfch\" (UniqueName: \"kubernetes.io/projected/72361b1f-122a-46d2-9acc-a0ccdb892326-kube-api-access-bsfch\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.231027 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dff58f544-954n8"] Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.231334 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-dff58f544-954n8" podUID="148c4ddd-2b85-4b45-bebc-fd77a7cb689e" containerName="neutron-api" containerID="cri-o://a81cc452c0a26e1ef54b6d1727b47412cfd75dddbd47425dfa68729aa725da1e" gracePeriod=30 Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.231774 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-dff58f544-954n8" podUID="148c4ddd-2b85-4b45-bebc-fd77a7cb689e" containerName="neutron-httpd" containerID="cri-o://afa0f4fd02bb5c02c1291ce6c94431d9c3ee85489a52c7a5ad4042527db1b6e6" gracePeriod=30 Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.232250 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.232206 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"72361b1f-122a-46d2-9acc-a0ccdb892326","Type":"ContainerDied","Data":"6b705cd327ed6a3496b353979e847f4e50dc019b0161b66aea3274d2d6e5d90d"} Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.232984 4869 scope.go:117] "RemoveContainer" containerID="2fc96364fbc6620e499bcb6b8edd7889210c92ac1568d0bdf1d06ee388fbc189" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.302442 4869 scope.go:117] "RemoveContainer" containerID="24084d3956b81c4f957adecf76348045d76c5cfa6d153bd1f2b28e981c44cc6e" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.309847 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-config-data" (OuterVolumeSpecName: "config-data") pod "72361b1f-122a-46d2-9acc-a0ccdb892326" (UID: "72361b1f-122a-46d2-9acc-a0ccdb892326"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.323026 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72361b1f-122a-46d2-9acc-a0ccdb892326-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.562032 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.571637 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.582914 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 06 14:19:11 crc kubenswrapper[4869]: E0106 14:19:11.583302 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72361b1f-122a-46d2-9acc-a0ccdb892326" containerName="cinder-scheduler" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.583326 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="72361b1f-122a-46d2-9acc-a0ccdb892326" containerName="cinder-scheduler" Jan 06 14:19:11 crc kubenswrapper[4869]: E0106 14:19:11.583348 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be800df2-784f-45eb-b280-81679e58eb7a" containerName="dnsmasq-dns" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.583354 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="be800df2-784f-45eb-b280-81679e58eb7a" containerName="dnsmasq-dns" Jan 06 14:19:11 crc kubenswrapper[4869]: E0106 14:19:11.583365 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72361b1f-122a-46d2-9acc-a0ccdb892326" containerName="probe" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.583372 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="72361b1f-122a-46d2-9acc-a0ccdb892326" containerName="probe" Jan 06 14:19:11 crc kubenswrapper[4869]: E0106 14:19:11.583382 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be800df2-784f-45eb-b280-81679e58eb7a" containerName="init" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.583388 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="be800df2-784f-45eb-b280-81679e58eb7a" containerName="init" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.583567 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="72361b1f-122a-46d2-9acc-a0ccdb892326" containerName="cinder-scheduler" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.583590 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="be800df2-784f-45eb-b280-81679e58eb7a" containerName="dnsmasq-dns" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.583602 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="72361b1f-122a-46d2-9acc-a0ccdb892326" containerName="probe" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.584500 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.586787 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.602451 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.728756 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72361b1f-122a-46d2-9acc-a0ccdb892326" path="/var/lib/kubelet/pods/72361b1f-122a-46d2-9acc-a0ccdb892326/volumes" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.732810 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d05e9f4-29bf-4c4b-8930-7346c2f4b33d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0d05e9f4-29bf-4c4b-8930-7346c2f4b33d\") " pod="openstack/cinder-scheduler-0" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.732884 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d05e9f4-29bf-4c4b-8930-7346c2f4b33d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0d05e9f4-29bf-4c4b-8930-7346c2f4b33d\") " pod="openstack/cinder-scheduler-0" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.732916 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhr94\" (UniqueName: \"kubernetes.io/projected/0d05e9f4-29bf-4c4b-8930-7346c2f4b33d-kube-api-access-jhr94\") pod \"cinder-scheduler-0\" (UID: \"0d05e9f4-29bf-4c4b-8930-7346c2f4b33d\") " pod="openstack/cinder-scheduler-0" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.737251 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d05e9f4-29bf-4c4b-8930-7346c2f4b33d-config-data\") pod \"cinder-scheduler-0\" (UID: \"0d05e9f4-29bf-4c4b-8930-7346c2f4b33d\") " pod="openstack/cinder-scheduler-0" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.738783 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d05e9f4-29bf-4c4b-8930-7346c2f4b33d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0d05e9f4-29bf-4c4b-8930-7346c2f4b33d\") " pod="openstack/cinder-scheduler-0" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.738825 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d05e9f4-29bf-4c4b-8930-7346c2f4b33d-scripts\") pod \"cinder-scheduler-0\" (UID: \"0d05e9f4-29bf-4c4b-8930-7346c2f4b33d\") " pod="openstack/cinder-scheduler-0" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.840608 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d05e9f4-29bf-4c4b-8930-7346c2f4b33d-config-data\") pod \"cinder-scheduler-0\" (UID: \"0d05e9f4-29bf-4c4b-8930-7346c2f4b33d\") " pod="openstack/cinder-scheduler-0" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.840697 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/0d05e9f4-29bf-4c4b-8930-7346c2f4b33d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0d05e9f4-29bf-4c4b-8930-7346c2f4b33d\") " pod="openstack/cinder-scheduler-0" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.840724 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d05e9f4-29bf-4c4b-8930-7346c2f4b33d-scripts\") pod \"cinder-scheduler-0\" (UID: \"0d05e9f4-29bf-4c4b-8930-7346c2f4b33d\") " pod="openstack/cinder-scheduler-0" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.840769 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d05e9f4-29bf-4c4b-8930-7346c2f4b33d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0d05e9f4-29bf-4c4b-8930-7346c2f4b33d\") " pod="openstack/cinder-scheduler-0" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.840824 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d05e9f4-29bf-4c4b-8930-7346c2f4b33d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0d05e9f4-29bf-4c4b-8930-7346c2f4b33d\") " pod="openstack/cinder-scheduler-0" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.840850 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhr94\" (UniqueName: \"kubernetes.io/projected/0d05e9f4-29bf-4c4b-8930-7346c2f4b33d-kube-api-access-jhr94\") pod \"cinder-scheduler-0\" (UID: \"0d05e9f4-29bf-4c4b-8930-7346c2f4b33d\") " pod="openstack/cinder-scheduler-0" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.841615 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d05e9f4-29bf-4c4b-8930-7346c2f4b33d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0d05e9f4-29bf-4c4b-8930-7346c2f4b33d\") " pod="openstack/cinder-scheduler-0" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.844731 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d05e9f4-29bf-4c4b-8930-7346c2f4b33d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0d05e9f4-29bf-4c4b-8930-7346c2f4b33d\") " pod="openstack/cinder-scheduler-0" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.845080 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d05e9f4-29bf-4c4b-8930-7346c2f4b33d-scripts\") pod \"cinder-scheduler-0\" (UID: \"0d05e9f4-29bf-4c4b-8930-7346c2f4b33d\") " pod="openstack/cinder-scheduler-0" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.848070 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d05e9f4-29bf-4c4b-8930-7346c2f4b33d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0d05e9f4-29bf-4c4b-8930-7346c2f4b33d\") " pod="openstack/cinder-scheduler-0" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.849441 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d05e9f4-29bf-4c4b-8930-7346c2f4b33d-config-data\") pod \"cinder-scheduler-0\" (UID: \"0d05e9f4-29bf-4c4b-8930-7346c2f4b33d\") " pod="openstack/cinder-scheduler-0" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.876198 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhr94\" (UniqueName: \"kubernetes.io/projected/0d05e9f4-29bf-4c4b-8930-7346c2f4b33d-kube-api-access-jhr94\") pod \"cinder-scheduler-0\" (UID: \"0d05e9f4-29bf-4c4b-8930-7346c2f4b33d\") " pod="openstack/cinder-scheduler-0" Jan 06 14:19:11 crc kubenswrapper[4869]: I0106 14:19:11.938238 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 06 14:19:12 crc kubenswrapper[4869]: I0106 14:19:12.248483 4869 generic.go:334] "Generic (PLEG): container finished" podID="148c4ddd-2b85-4b45-bebc-fd77a7cb689e" containerID="afa0f4fd02bb5c02c1291ce6c94431d9c3ee85489a52c7a5ad4042527db1b6e6" exitCode=0 Jan 06 14:19:12 crc kubenswrapper[4869]: I0106 14:19:12.248640 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dff58f544-954n8" event={"ID":"148c4ddd-2b85-4b45-bebc-fd77a7cb689e","Type":"ContainerDied","Data":"afa0f4fd02bb5c02c1291ce6c94431d9c3ee85489a52c7a5ad4042527db1b6e6"} Jan 06 14:19:12 crc kubenswrapper[4869]: I0106 14:19:12.427128 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 06 14:19:12 crc kubenswrapper[4869]: W0106 14:19:12.444813 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d05e9f4_29bf_4c4b_8930_7346c2f4b33d.slice/crio-45aa166394c08cd739426c44891d7c23b8dec50cf60ab3dca4f9cd1a63643b3b WatchSource:0}: Error finding container 45aa166394c08cd739426c44891d7c23b8dec50cf60ab3dca4f9cd1a63643b3b: Status 404 returned error can't find the container with id 45aa166394c08cd739426c44891d7c23b8dec50cf60ab3dca4f9cd1a63643b3b Jan 06 14:19:12 crc kubenswrapper[4869]: I0106 14:19:12.600074 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6cbc78c6fb-zx9xt" podUID="3f909e77-b5e4-46b0-b0b1-246b9fde7b73" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.148:9311/healthcheck\": read tcp 10.217.0.2:58736->10.217.0.148:9311: read: connection reset by peer" Jan 06 14:19:12 crc kubenswrapper[4869]: I0106 14:19:12.600085 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6cbc78c6fb-zx9xt" podUID="3f909e77-b5e4-46b0-b0b1-246b9fde7b73" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.148:9311/healthcheck\": read tcp 10.217.0.2:58732->10.217.0.148:9311: read: connection reset by peer" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.153476 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.154798 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 06 14:19:13 crc kubenswrapper[4869]: E0106 14:19:13.155179 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f909e77-b5e4-46b0-b0b1-246b9fde7b73" containerName="barbican-api-log" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.155199 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f909e77-b5e4-46b0-b0b1-246b9fde7b73" containerName="barbican-api-log" Jan 06 14:19:13 crc kubenswrapper[4869]: E0106 14:19:13.155214 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f909e77-b5e4-46b0-b0b1-246b9fde7b73" containerName="barbican-api" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.155220 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f909e77-b5e4-46b0-b0b1-246b9fde7b73" containerName="barbican-api" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.155369 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f909e77-b5e4-46b0-b0b1-246b9fde7b73" containerName="barbican-api-log" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.155389 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f909e77-b5e4-46b0-b0b1-246b9fde7b73" containerName="barbican-api" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.156130 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.157605 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.158775 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.164118 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-28sr7" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.185957 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.270979 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d05e9f4-29bf-4c4b-8930-7346c2f4b33d","Type":"ContainerStarted","Data":"45aa166394c08cd739426c44891d7c23b8dec50cf60ab3dca4f9cd1a63643b3b"} Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.273421 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-config-data-custom\") pod \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\" (UID: \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\") " Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.273483 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-logs\") pod \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\" (UID: \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\") " Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.273507 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-combined-ca-bundle\") pod 
\"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\" (UID: \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\") " Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.273588 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-config-data\") pod \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\" (UID: \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\") " Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.273605 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4btvp\" (UniqueName: \"kubernetes.io/projected/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-kube-api-access-4btvp\") pod \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\" (UID: \"3f909e77-b5e4-46b0-b0b1-246b9fde7b73\") " Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.273882 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4c2820db-156a-4ad5-96b7-26f14d172e95-openstack-config\") pod \"openstackclient\" (UID: \"4c2820db-156a-4ad5-96b7-26f14d172e95\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.273938 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fpzr\" (UniqueName: \"kubernetes.io/projected/4c2820db-156a-4ad5-96b7-26f14d172e95-kube-api-access-6fpzr\") pod \"openstackclient\" (UID: \"4c2820db-156a-4ad5-96b7-26f14d172e95\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.274030 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2820db-156a-4ad5-96b7-26f14d172e95-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4c2820db-156a-4ad5-96b7-26f14d172e95\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.274073 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4c2820db-156a-4ad5-96b7-26f14d172e95-openstack-config-secret\") pod \"openstackclient\" (UID: \"4c2820db-156a-4ad5-96b7-26f14d172e95\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.274203 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-logs" (OuterVolumeSpecName: "logs") pod "3f909e77-b5e4-46b0-b0b1-246b9fde7b73" (UID: "3f909e77-b5e4-46b0-b0b1-246b9fde7b73"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.274358 4869 generic.go:334] "Generic (PLEG): container finished" podID="3f909e77-b5e4-46b0-b0b1-246b9fde7b73" containerID="9c9c7f12fc0d3f22247723a7d6a767df308c09b8fd911ae69d2cc364c0b6f424" exitCode=0 Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.274388 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cbc78c6fb-zx9xt" event={"ID":"3f909e77-b5e4-46b0-b0b1-246b9fde7b73","Type":"ContainerDied","Data":"9c9c7f12fc0d3f22247723a7d6a767df308c09b8fd911ae69d2cc364c0b6f424"} Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.274409 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cbc78c6fb-zx9xt" event={"ID":"3f909e77-b5e4-46b0-b0b1-246b9fde7b73","Type":"ContainerDied","Data":"cf88e1d3323ce94a910bad03595b42c2f19bde0b14ef6750a6e1b7dbf27a6493"} Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.274426 4869 scope.go:117] "RemoveContainer" containerID="9c9c7f12fc0d3f22247723a7d6a767df308c09b8fd911ae69d2cc364c0b6f424" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.274537 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6cbc78c6fb-zx9xt" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.280278 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3f909e77-b5e4-46b0-b0b1-246b9fde7b73" (UID: "3f909e77-b5e4-46b0-b0b1-246b9fde7b73"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.280622 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-kube-api-access-4btvp" (OuterVolumeSpecName: "kube-api-access-4btvp") pod "3f909e77-b5e4-46b0-b0b1-246b9fde7b73" (UID: "3f909e77-b5e4-46b0-b0b1-246b9fde7b73"). InnerVolumeSpecName "kube-api-access-4btvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.302776 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f909e77-b5e4-46b0-b0b1-246b9fde7b73" (UID: "3f909e77-b5e4-46b0-b0b1-246b9fde7b73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.331564 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-config-data" (OuterVolumeSpecName: "config-data") pod "3f909e77-b5e4-46b0-b0b1-246b9fde7b73" (UID: "3f909e77-b5e4-46b0-b0b1-246b9fde7b73"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.375520 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2820db-156a-4ad5-96b7-26f14d172e95-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4c2820db-156a-4ad5-96b7-26f14d172e95\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.375584 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4c2820db-156a-4ad5-96b7-26f14d172e95-openstack-config-secret\") pod \"openstackclient\" (UID: \"4c2820db-156a-4ad5-96b7-26f14d172e95\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.375611 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4c2820db-156a-4ad5-96b7-26f14d172e95-openstack-config\") pod \"openstackclient\" (UID: \"4c2820db-156a-4ad5-96b7-26f14d172e95\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.375646 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fpzr\" (UniqueName: \"kubernetes.io/projected/4c2820db-156a-4ad5-96b7-26f14d172e95-kube-api-access-6fpzr\") pod \"openstackclient\" (UID: \"4c2820db-156a-4ad5-96b7-26f14d172e95\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.375737 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.375752 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4btvp\" (UniqueName: \"kubernetes.io/projected/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-kube-api-access-4btvp\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.375765 4869 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.375774 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-logs\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.375782 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f909e77-b5e4-46b0-b0b1-246b9fde7b73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.380102 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4c2820db-156a-4ad5-96b7-26f14d172e95-openstack-config\") pod \"openstackclient\" (UID: \"4c2820db-156a-4ad5-96b7-26f14d172e95\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.382905 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4c2820db-156a-4ad5-96b7-26f14d172e95-openstack-config-secret\") pod \"openstackclient\" (UID: 
\"4c2820db-156a-4ad5-96b7-26f14d172e95\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.384444 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2820db-156a-4ad5-96b7-26f14d172e95-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4c2820db-156a-4ad5-96b7-26f14d172e95\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.392862 4869 scope.go:117] "RemoveContainer" containerID="68fa3037263ef13431d633903b0a9ba84a0c4545ff4e451b987240f0ad644aaf" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.401318 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fpzr\" (UniqueName: \"kubernetes.io/projected/4c2820db-156a-4ad5-96b7-26f14d172e95-kube-api-access-6fpzr\") pod \"openstackclient\" (UID: \"4c2820db-156a-4ad5-96b7-26f14d172e95\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.433259 4869 scope.go:117] "RemoveContainer" containerID="9c9c7f12fc0d3f22247723a7d6a767df308c09b8fd911ae69d2cc364c0b6f424" Jan 06 14:19:13 crc kubenswrapper[4869]: E0106 14:19:13.434014 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c9c7f12fc0d3f22247723a7d6a767df308c09b8fd911ae69d2cc364c0b6f424\": container with ID starting with 9c9c7f12fc0d3f22247723a7d6a767df308c09b8fd911ae69d2cc364c0b6f424 not found: ID does not exist" containerID="9c9c7f12fc0d3f22247723a7d6a767df308c09b8fd911ae69d2cc364c0b6f424" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.434057 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c9c7f12fc0d3f22247723a7d6a767df308c09b8fd911ae69d2cc364c0b6f424"} err="failed to get container status \"9c9c7f12fc0d3f22247723a7d6a767df308c09b8fd911ae69d2cc364c0b6f424\": rpc error: code = NotFound desc = could not find container \"9c9c7f12fc0d3f22247723a7d6a767df308c09b8fd911ae69d2cc364c0b6f424\": container with ID starting with 9c9c7f12fc0d3f22247723a7d6a767df308c09b8fd911ae69d2cc364c0b6f424 not found: ID does not exist" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.434081 4869 scope.go:117] "RemoveContainer" containerID="68fa3037263ef13431d633903b0a9ba84a0c4545ff4e451b987240f0ad644aaf" Jan 06 14:19:13 crc kubenswrapper[4869]: E0106 14:19:13.437912 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68fa3037263ef13431d633903b0a9ba84a0c4545ff4e451b987240f0ad644aaf\": container with ID starting with 68fa3037263ef13431d633903b0a9ba84a0c4545ff4e451b987240f0ad644aaf not found: ID does not exist" containerID="68fa3037263ef13431d633903b0a9ba84a0c4545ff4e451b987240f0ad644aaf" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.437950 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68fa3037263ef13431d633903b0a9ba84a0c4545ff4e451b987240f0ad644aaf"} err="failed to get container status \"68fa3037263ef13431d633903b0a9ba84a0c4545ff4e451b987240f0ad644aaf\": rpc error: code = NotFound desc = could not find container \"68fa3037263ef13431d633903b0a9ba84a0c4545ff4e451b987240f0ad644aaf\": container with ID starting with 68fa3037263ef13431d633903b0a9ba84a0c4545ff4e451b987240f0ad644aaf not found: ID does not exist" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.479210 4869 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.490068 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.537707 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.571819 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.573058 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.605626 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.641968 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6cbc78c6fb-zx9xt"] Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.650719 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6cbc78c6fb-zx9xt"] Jan 06 14:19:13 crc kubenswrapper[4869]: E0106 14:19:13.675361 4869 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 06 14:19:13 crc kubenswrapper[4869]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_4c2820db-156a-4ad5-96b7-26f14d172e95_0(36417adc94224a79f9d88ef57dfa7618403611543778b8980e66e336ae891293): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"36417adc94224a79f9d88ef57dfa7618403611543778b8980e66e336ae891293" Netns:"/var/run/netns/27b272d8-51d0-4bd2-bcb2-7484fe468485" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=36417adc94224a79f9d88ef57dfa7618403611543778b8980e66e336ae891293;K8S_POD_UID=4c2820db-156a-4ad5-96b7-26f14d172e95" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/4c2820db-156a-4ad5-96b7-26f14d172e95]: expected pod UID "4c2820db-156a-4ad5-96b7-26f14d172e95" but got "368ebbb8-5558-42d5-a18d-516ff3e623bf" from Kube API Jan 06 14:19:13 crc kubenswrapper[4869]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 06 14:19:13 crc kubenswrapper[4869]: > Jan 06 14:19:13 crc kubenswrapper[4869]: E0106 14:19:13.675650 4869 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 06 14:19:13 crc kubenswrapper[4869]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_4c2820db-156a-4ad5-96b7-26f14d172e95_0(36417adc94224a79f9d88ef57dfa7618403611543778b8980e66e336ae891293): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"36417adc94224a79f9d88ef57dfa7618403611543778b8980e66e336ae891293" 
Netns:"/var/run/netns/27b272d8-51d0-4bd2-bcb2-7484fe468485" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=36417adc94224a79f9d88ef57dfa7618403611543778b8980e66e336ae891293;K8S_POD_UID=4c2820db-156a-4ad5-96b7-26f14d172e95" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/4c2820db-156a-4ad5-96b7-26f14d172e95]: expected pod UID "4c2820db-156a-4ad5-96b7-26f14d172e95" but got "368ebbb8-5558-42d5-a18d-516ff3e623bf" from Kube API Jan 06 14:19:13 crc kubenswrapper[4869]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 06 14:19:13 crc kubenswrapper[4869]: > pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.684142 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/368ebbb8-5558-42d5-a18d-516ff3e623bf-openstack-config-secret\") pod \"openstackclient\" (UID: \"368ebbb8-5558-42d5-a18d-516ff3e623bf\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.684361 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368ebbb8-5558-42d5-a18d-516ff3e623bf-combined-ca-bundle\") pod \"openstackclient\" (UID: \"368ebbb8-5558-42d5-a18d-516ff3e623bf\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.684511 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/368ebbb8-5558-42d5-a18d-516ff3e623bf-openstack-config\") pod \"openstackclient\" (UID: \"368ebbb8-5558-42d5-a18d-516ff3e623bf\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.684637 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rbkx\" (UniqueName: \"kubernetes.io/projected/368ebbb8-5558-42d5-a18d-516ff3e623bf-kube-api-access-7rbkx\") pod \"openstackclient\" (UID: \"368ebbb8-5558-42d5-a18d-516ff3e623bf\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.719788 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f909e77-b5e4-46b0-b0b1-246b9fde7b73" path="/var/lib/kubelet/pods/3f909e77-b5e4-46b0-b0b1-246b9fde7b73/volumes" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.759892 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.800479 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/368ebbb8-5558-42d5-a18d-516ff3e623bf-openstack-config-secret\") pod \"openstackclient\" (UID: \"368ebbb8-5558-42d5-a18d-516ff3e623bf\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.800549 4869 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368ebbb8-5558-42d5-a18d-516ff3e623bf-combined-ca-bundle\") pod \"openstackclient\" (UID: \"368ebbb8-5558-42d5-a18d-516ff3e623bf\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.800707 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/368ebbb8-5558-42d5-a18d-516ff3e623bf-openstack-config\") pod \"openstackclient\" (UID: \"368ebbb8-5558-42d5-a18d-516ff3e623bf\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.800821 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rbkx\" (UniqueName: \"kubernetes.io/projected/368ebbb8-5558-42d5-a18d-516ff3e623bf-kube-api-access-7rbkx\") pod \"openstackclient\" (UID: \"368ebbb8-5558-42d5-a18d-516ff3e623bf\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.802303 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/368ebbb8-5558-42d5-a18d-516ff3e623bf-openstack-config\") pod \"openstackclient\" (UID: \"368ebbb8-5558-42d5-a18d-516ff3e623bf\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.805086 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/368ebbb8-5558-42d5-a18d-516ff3e623bf-openstack-config-secret\") pod \"openstackclient\" (UID: \"368ebbb8-5558-42d5-a18d-516ff3e623bf\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.807387 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/368ebbb8-5558-42d5-a18d-516ff3e623bf-combined-ca-bundle\") pod \"openstackclient\" (UID: \"368ebbb8-5558-42d5-a18d-516ff3e623bf\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.823105 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rbkx\" (UniqueName: \"kubernetes.io/projected/368ebbb8-5558-42d5-a18d-516ff3e623bf-kube-api-access-7rbkx\") pod \"openstackclient\" (UID: \"368ebbb8-5558-42d5-a18d-516ff3e623bf\") " pod="openstack/openstackclient" Jan 06 14:19:13 crc kubenswrapper[4869]: I0106 14:19:13.921286 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 06 14:19:14 crc kubenswrapper[4869]: I0106 14:19:14.287085 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d05e9f4-29bf-4c4b-8930-7346c2f4b33d","Type":"ContainerStarted","Data":"e4153edfc1c1b7a65a9a9df9718d43047b5dd314910b898e597a4a5f228addfd"} Jan 06 14:19:14 crc kubenswrapper[4869]: I0106 14:19:14.287429 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d05e9f4-29bf-4c4b-8930-7346c2f4b33d","Type":"ContainerStarted","Data":"4121d37075c84bf0d8078ecb17f297c6c4f0354fd55f159800bacc7fbd554630"} Jan 06 14:19:14 crc kubenswrapper[4869]: I0106 14:19:14.288499 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 06 14:19:14 crc kubenswrapper[4869]: I0106 14:19:14.296809 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 06 14:19:14 crc kubenswrapper[4869]: I0106 14:19:14.312194 4869 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="4c2820db-156a-4ad5-96b7-26f14d172e95" podUID="368ebbb8-5558-42d5-a18d-516ff3e623bf" Jan 06 14:19:14 crc kubenswrapper[4869]: I0106 14:19:14.313210 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.313186738 podStartE2EDuration="3.313186738s" podCreationTimestamp="2026-01-06 14:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:19:14.305457387 +0000 UTC m=+1172.845145051" watchObservedRunningTime="2026-01-06 14:19:14.313186738 +0000 UTC m=+1172.852874392" Jan 06 14:19:14 crc kubenswrapper[4869]: I0106 14:19:14.418048 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4c2820db-156a-4ad5-96b7-26f14d172e95-openstack-config-secret\") pod \"4c2820db-156a-4ad5-96b7-26f14d172e95\" (UID: \"4c2820db-156a-4ad5-96b7-26f14d172e95\") " Jan 06 14:19:14 crc kubenswrapper[4869]: I0106 14:19:14.418201 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fpzr\" (UniqueName: \"kubernetes.io/projected/4c2820db-156a-4ad5-96b7-26f14d172e95-kube-api-access-6fpzr\") pod \"4c2820db-156a-4ad5-96b7-26f14d172e95\" (UID: \"4c2820db-156a-4ad5-96b7-26f14d172e95\") " Jan 06 14:19:14 crc kubenswrapper[4869]: I0106 14:19:14.418222 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2820db-156a-4ad5-96b7-26f14d172e95-combined-ca-bundle\") pod \"4c2820db-156a-4ad5-96b7-26f14d172e95\" (UID: \"4c2820db-156a-4ad5-96b7-26f14d172e95\") " Jan 06 14:19:14 crc kubenswrapper[4869]: I0106 14:19:14.418291 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4c2820db-156a-4ad5-96b7-26f14d172e95-openstack-config\") pod \"4c2820db-156a-4ad5-96b7-26f14d172e95\" (UID: \"4c2820db-156a-4ad5-96b7-26f14d172e95\") " Jan 06 14:19:14 crc kubenswrapper[4869]: I0106 14:19:14.418899 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c2820db-156a-4ad5-96b7-26f14d172e95-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "4c2820db-156a-4ad5-96b7-26f14d172e95" (UID: "4c2820db-156a-4ad5-96b7-26f14d172e95"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:19:14 crc kubenswrapper[4869]: I0106 14:19:14.427559 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c2820db-156a-4ad5-96b7-26f14d172e95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c2820db-156a-4ad5-96b7-26f14d172e95" (UID: "4c2820db-156a-4ad5-96b7-26f14d172e95"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:14 crc kubenswrapper[4869]: I0106 14:19:14.434438 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c2820db-156a-4ad5-96b7-26f14d172e95-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "4c2820db-156a-4ad5-96b7-26f14d172e95" (UID: "4c2820db-156a-4ad5-96b7-26f14d172e95"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:14 crc kubenswrapper[4869]: I0106 14:19:14.435890 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c2820db-156a-4ad5-96b7-26f14d172e95-kube-api-access-6fpzr" (OuterVolumeSpecName: "kube-api-access-6fpzr") pod "4c2820db-156a-4ad5-96b7-26f14d172e95" (UID: "4c2820db-156a-4ad5-96b7-26f14d172e95"). InnerVolumeSpecName "kube-api-access-6fpzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:19:14 crc kubenswrapper[4869]: I0106 14:19:14.471515 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 06 14:19:14 crc kubenswrapper[4869]: I0106 14:19:14.520493 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fpzr\" (UniqueName: \"kubernetes.io/projected/4c2820db-156a-4ad5-96b7-26f14d172e95-kube-api-access-6fpzr\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:14 crc kubenswrapper[4869]: I0106 14:19:14.520531 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2820db-156a-4ad5-96b7-26f14d172e95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:14 crc kubenswrapper[4869]: I0106 14:19:14.520541 4869 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4c2820db-156a-4ad5-96b7-26f14d172e95-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:14 crc kubenswrapper[4869]: I0106 14:19:14.520551 4869 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4c2820db-156a-4ad5-96b7-26f14d172e95-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.208500 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dff58f544-954n8" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.297492 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"368ebbb8-5558-42d5-a18d-516ff3e623bf","Type":"ContainerStarted","Data":"0cff2ea9053f5adc1e74a4924cecac009c5c0dd89c85af9960767c836e69051c"} Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.300648 4869 generic.go:334] "Generic (PLEG): container finished" podID="148c4ddd-2b85-4b45-bebc-fd77a7cb689e" containerID="a81cc452c0a26e1ef54b6d1727b47412cfd75dddbd47425dfa68729aa725da1e" exitCode=0 Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.300696 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dff58f544-954n8" event={"ID":"148c4ddd-2b85-4b45-bebc-fd77a7cb689e","Type":"ContainerDied","Data":"a81cc452c0a26e1ef54b6d1727b47412cfd75dddbd47425dfa68729aa725da1e"} Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.300734 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dff58f544-954n8" event={"ID":"148c4ddd-2b85-4b45-bebc-fd77a7cb689e","Type":"ContainerDied","Data":"21bb003af501828f5e1674e0653111d2da527c894cb8d4c6a4b665eeb8aacfbd"} Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.300752 4869 scope.go:117] "RemoveContainer" containerID="afa0f4fd02bb5c02c1291ce6c94431d9c3ee85489a52c7a5ad4042527db1b6e6" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.300752 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dff58f544-954n8" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.300900 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.314829 4869 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="4c2820db-156a-4ad5-96b7-26f14d172e95" podUID="368ebbb8-5558-42d5-a18d-516ff3e623bf" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.327932 4869 scope.go:117] "RemoveContainer" containerID="a81cc452c0a26e1ef54b6d1727b47412cfd75dddbd47425dfa68729aa725da1e" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.332652 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-config\") pod \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\" (UID: \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\") " Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.332851 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-httpd-config\") pod \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\" (UID: \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\") " Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.332925 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtt7d\" (UniqueName: \"kubernetes.io/projected/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-kube-api-access-rtt7d\") pod \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\" (UID: \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\") " Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.332946 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-combined-ca-bundle\") pod 
\"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\" (UID: \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\") " Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.333047 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-ovndb-tls-certs\") pod \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\" (UID: \"148c4ddd-2b85-4b45-bebc-fd77a7cb689e\") " Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.342973 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-kube-api-access-rtt7d" (OuterVolumeSpecName: "kube-api-access-rtt7d") pod "148c4ddd-2b85-4b45-bebc-fd77a7cb689e" (UID: "148c4ddd-2b85-4b45-bebc-fd77a7cb689e"). InnerVolumeSpecName "kube-api-access-rtt7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.345796 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "148c4ddd-2b85-4b45-bebc-fd77a7cb689e" (UID: "148c4ddd-2b85-4b45-bebc-fd77a7cb689e"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.387037 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-config" (OuterVolumeSpecName: "config") pod "148c4ddd-2b85-4b45-bebc-fd77a7cb689e" (UID: "148c4ddd-2b85-4b45-bebc-fd77a7cb689e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.399183 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "148c4ddd-2b85-4b45-bebc-fd77a7cb689e" (UID: "148c4ddd-2b85-4b45-bebc-fd77a7cb689e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.414498 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "148c4ddd-2b85-4b45-bebc-fd77a7cb689e" (UID: "148c4ddd-2b85-4b45-bebc-fd77a7cb689e"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.435796 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtt7d\" (UniqueName: \"kubernetes.io/projected/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-kube-api-access-rtt7d\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.435829 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.435838 4869 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.435847 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.435855 4869 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/148c4ddd-2b85-4b45-bebc-fd77a7cb689e-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.451934 4869 scope.go:117] "RemoveContainer" containerID="afa0f4fd02bb5c02c1291ce6c94431d9c3ee85489a52c7a5ad4042527db1b6e6" Jan 06 14:19:15 crc kubenswrapper[4869]: E0106 14:19:15.452328 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afa0f4fd02bb5c02c1291ce6c94431d9c3ee85489a52c7a5ad4042527db1b6e6\": container with ID starting with afa0f4fd02bb5c02c1291ce6c94431d9c3ee85489a52c7a5ad4042527db1b6e6 not found: ID does not exist" containerID="afa0f4fd02bb5c02c1291ce6c94431d9c3ee85489a52c7a5ad4042527db1b6e6" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.452364 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afa0f4fd02bb5c02c1291ce6c94431d9c3ee85489a52c7a5ad4042527db1b6e6"} err="failed to get container status \"afa0f4fd02bb5c02c1291ce6c94431d9c3ee85489a52c7a5ad4042527db1b6e6\": rpc error: code = NotFound desc = could not find container \"afa0f4fd02bb5c02c1291ce6c94431d9c3ee85489a52c7a5ad4042527db1b6e6\": container with ID starting with afa0f4fd02bb5c02c1291ce6c94431d9c3ee85489a52c7a5ad4042527db1b6e6 not found: ID does not exist" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.452384 4869 scope.go:117] "RemoveContainer" containerID="a81cc452c0a26e1ef54b6d1727b47412cfd75dddbd47425dfa68729aa725da1e" Jan 06 14:19:15 crc kubenswrapper[4869]: E0106 14:19:15.452635 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a81cc452c0a26e1ef54b6d1727b47412cfd75dddbd47425dfa68729aa725da1e\": container with ID starting with a81cc452c0a26e1ef54b6d1727b47412cfd75dddbd47425dfa68729aa725da1e not found: ID does not exist" containerID="a81cc452c0a26e1ef54b6d1727b47412cfd75dddbd47425dfa68729aa725da1e" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.452692 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a81cc452c0a26e1ef54b6d1727b47412cfd75dddbd47425dfa68729aa725da1e"} err="failed to get container status 
\"a81cc452c0a26e1ef54b6d1727b47412cfd75dddbd47425dfa68729aa725da1e\": rpc error: code = NotFound desc = could not find container \"a81cc452c0a26e1ef54b6d1727b47412cfd75dddbd47425dfa68729aa725da1e\": container with ID starting with a81cc452c0a26e1ef54b6d1727b47412cfd75dddbd47425dfa68729aa725da1e not found: ID does not exist" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.633584 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dff58f544-954n8"] Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.641889 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-dff58f544-954n8"] Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.716513 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="148c4ddd-2b85-4b45-bebc-fd77a7cb689e" path="/var/lib/kubelet/pods/148c4ddd-2b85-4b45-bebc-fd77a7cb689e/volumes" Jan 06 14:19:15 crc kubenswrapper[4869]: I0106 14:19:15.717251 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c2820db-156a-4ad5-96b7-26f14d172e95" path="/var/lib/kubelet/pods/4c2820db-156a-4ad5-96b7-26f14d172e95/volumes" Jan 06 14:19:16 crc kubenswrapper[4869]: I0106 14:19:16.938923 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 06 14:19:18 crc kubenswrapper[4869]: I0106 14:19:18.756218 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-q9kvc"] Jan 06 14:19:18 crc kubenswrapper[4869]: E0106 14:19:18.758628 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="148c4ddd-2b85-4b45-bebc-fd77a7cb689e" containerName="neutron-api" Jan 06 14:19:18 crc kubenswrapper[4869]: I0106 14:19:18.758655 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="148c4ddd-2b85-4b45-bebc-fd77a7cb689e" containerName="neutron-api" Jan 06 14:19:18 crc kubenswrapper[4869]: E0106 14:19:18.758681 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="148c4ddd-2b85-4b45-bebc-fd77a7cb689e" containerName="neutron-httpd" Jan 06 14:19:18 crc kubenswrapper[4869]: I0106 14:19:18.758688 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="148c4ddd-2b85-4b45-bebc-fd77a7cb689e" containerName="neutron-httpd" Jan 06 14:19:18 crc kubenswrapper[4869]: I0106 14:19:18.762805 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="148c4ddd-2b85-4b45-bebc-fd77a7cb689e" containerName="neutron-httpd" Jan 06 14:19:18 crc kubenswrapper[4869]: I0106 14:19:18.762883 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="148c4ddd-2b85-4b45-bebc-fd77a7cb689e" containerName="neutron-api" Jan 06 14:19:18 crc kubenswrapper[4869]: I0106 14:19:18.763800 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-q9kvc" Jan 06 14:19:18 crc kubenswrapper[4869]: I0106 14:19:18.770896 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-q9kvc"] Jan 06 14:19:18 crc kubenswrapper[4869]: I0106 14:19:18.866345 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-thdfz"] Jan 06 14:19:18 crc kubenswrapper[4869]: I0106 14:19:18.868636 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-thdfz" Jan 06 14:19:18 crc kubenswrapper[4869]: I0106 14:19:18.876755 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-thdfz"] Jan 06 14:19:18 crc kubenswrapper[4869]: I0106 14:19:18.910249 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5b8k\" (UniqueName: \"kubernetes.io/projected/cf1c9b2b-a06d-40c4-8471-246a2041fa96-kube-api-access-v5b8k\") pod \"nova-api-db-create-q9kvc\" (UID: \"cf1c9b2b-a06d-40c4-8471-246a2041fa96\") " pod="openstack/nova-api-db-create-q9kvc" Jan 06 14:19:18 crc kubenswrapper[4869]: I0106 14:19:18.910432 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf1c9b2b-a06d-40c4-8471-246a2041fa96-operator-scripts\") pod \"nova-api-db-create-q9kvc\" (UID: \"cf1c9b2b-a06d-40c4-8471-246a2041fa96\") " pod="openstack/nova-api-db-create-q9kvc" Jan 06 14:19:18 crc kubenswrapper[4869]: I0106 14:19:18.965742 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-xp2nl"] Jan 06 14:19:18 crc kubenswrapper[4869]: I0106 14:19:18.967021 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xp2nl" Jan 06 14:19:18 crc kubenswrapper[4869]: I0106 14:19:18.978310 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-6e3c-account-create-update-4bhj5"] Jan 06 14:19:18 crc kubenswrapper[4869]: I0106 14:19:18.979501 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6e3c-account-create-update-4bhj5" Jan 06 14:19:18 crc kubenswrapper[4869]: I0106 14:19:18.982211 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 06 14:19:18 crc kubenswrapper[4869]: I0106 14:19:18.988954 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xp2nl"] Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.010282 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6e3c-account-create-update-4bhj5"] Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.011742 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf1c9b2b-a06d-40c4-8471-246a2041fa96-operator-scripts\") pod \"nova-api-db-create-q9kvc\" (UID: \"cf1c9b2b-a06d-40c4-8471-246a2041fa96\") " pod="openstack/nova-api-db-create-q9kvc" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.011914 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5b8k\" (UniqueName: \"kubernetes.io/projected/cf1c9b2b-a06d-40c4-8471-246a2041fa96-kube-api-access-v5b8k\") pod \"nova-api-db-create-q9kvc\" (UID: \"cf1c9b2b-a06d-40c4-8471-246a2041fa96\") " pod="openstack/nova-api-db-create-q9kvc" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.011991 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sntxl\" (UniqueName: \"kubernetes.io/projected/93537664-809a-4f60-add8-bccd7a8b08a2-kube-api-access-sntxl\") pod \"nova-cell0-db-create-thdfz\" (UID: \"93537664-809a-4f60-add8-bccd7a8b08a2\") " pod="openstack/nova-cell0-db-create-thdfz" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.012104 4869 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93537664-809a-4f60-add8-bccd7a8b08a2-operator-scripts\") pod \"nova-cell0-db-create-thdfz\" (UID: \"93537664-809a-4f60-add8-bccd7a8b08a2\") " pod="openstack/nova-cell0-db-create-thdfz" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.012657 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf1c9b2b-a06d-40c4-8471-246a2041fa96-operator-scripts\") pod \"nova-api-db-create-q9kvc\" (UID: \"cf1c9b2b-a06d-40c4-8471-246a2041fa96\") " pod="openstack/nova-api-db-create-q9kvc" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.047566 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5b8k\" (UniqueName: \"kubernetes.io/projected/cf1c9b2b-a06d-40c4-8471-246a2041fa96-kube-api-access-v5b8k\") pod \"nova-api-db-create-q9kvc\" (UID: \"cf1c9b2b-a06d-40c4-8471-246a2041fa96\") " pod="openstack/nova-api-db-create-q9kvc" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.113741 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0d99944-ef47-4a37-b27b-b68ee2aafa99-operator-scripts\") pod \"nova-api-6e3c-account-create-update-4bhj5\" (UID: \"a0d99944-ef47-4a37-b27b-b68ee2aafa99\") " pod="openstack/nova-api-6e3c-account-create-update-4bhj5" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.114046 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cklj4\" (UniqueName: \"kubernetes.io/projected/3f95a5bc-df02-4c08-bd3d-fbc4faa9db21-kube-api-access-cklj4\") pod \"nova-cell1-db-create-xp2nl\" (UID: \"3f95a5bc-df02-4c08-bd3d-fbc4faa9db21\") " pod="openstack/nova-cell1-db-create-xp2nl" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.114085 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93537664-809a-4f60-add8-bccd7a8b08a2-operator-scripts\") pod \"nova-cell0-db-create-thdfz\" (UID: \"93537664-809a-4f60-add8-bccd7a8b08a2\") " pod="openstack/nova-cell0-db-create-thdfz" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.114115 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54xw6\" (UniqueName: \"kubernetes.io/projected/a0d99944-ef47-4a37-b27b-b68ee2aafa99-kube-api-access-54xw6\") pod \"nova-api-6e3c-account-create-update-4bhj5\" (UID: \"a0d99944-ef47-4a37-b27b-b68ee2aafa99\") " pod="openstack/nova-api-6e3c-account-create-update-4bhj5" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.114244 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f95a5bc-df02-4c08-bd3d-fbc4faa9db21-operator-scripts\") pod \"nova-cell1-db-create-xp2nl\" (UID: \"3f95a5bc-df02-4c08-bd3d-fbc4faa9db21\") " pod="openstack/nova-cell1-db-create-xp2nl" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.114282 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sntxl\" (UniqueName: \"kubernetes.io/projected/93537664-809a-4f60-add8-bccd7a8b08a2-kube-api-access-sntxl\") pod \"nova-cell0-db-create-thdfz\" (UID: \"93537664-809a-4f60-add8-bccd7a8b08a2\") " 
pod="openstack/nova-cell0-db-create-thdfz" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.114837 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93537664-809a-4f60-add8-bccd7a8b08a2-operator-scripts\") pod \"nova-cell0-db-create-thdfz\" (UID: \"93537664-809a-4f60-add8-bccd7a8b08a2\") " pod="openstack/nova-cell0-db-create-thdfz" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.115247 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-q9kvc" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.148742 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sntxl\" (UniqueName: \"kubernetes.io/projected/93537664-809a-4f60-add8-bccd7a8b08a2-kube-api-access-sntxl\") pod \"nova-cell0-db-create-thdfz\" (UID: \"93537664-809a-4f60-add8-bccd7a8b08a2\") " pod="openstack/nova-cell0-db-create-thdfz" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.182622 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-97dc-account-create-update-sslwl"] Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.183789 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-97dc-account-create-update-sslwl" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.187322 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.195049 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-97dc-account-create-update-sslwl"] Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.198911 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-thdfz" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.215478 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0d99944-ef47-4a37-b27b-b68ee2aafa99-operator-scripts\") pod \"nova-api-6e3c-account-create-update-4bhj5\" (UID: \"a0d99944-ef47-4a37-b27b-b68ee2aafa99\") " pod="openstack/nova-api-6e3c-account-create-update-4bhj5" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.215543 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cklj4\" (UniqueName: \"kubernetes.io/projected/3f95a5bc-df02-4c08-bd3d-fbc4faa9db21-kube-api-access-cklj4\") pod \"nova-cell1-db-create-xp2nl\" (UID: \"3f95a5bc-df02-4c08-bd3d-fbc4faa9db21\") " pod="openstack/nova-cell1-db-create-xp2nl" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.215576 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54xw6\" (UniqueName: \"kubernetes.io/projected/a0d99944-ef47-4a37-b27b-b68ee2aafa99-kube-api-access-54xw6\") pod \"nova-api-6e3c-account-create-update-4bhj5\" (UID: \"a0d99944-ef47-4a37-b27b-b68ee2aafa99\") " pod="openstack/nova-api-6e3c-account-create-update-4bhj5" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.215657 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f95a5bc-df02-4c08-bd3d-fbc4faa9db21-operator-scripts\") pod \"nova-cell1-db-create-xp2nl\" (UID: \"3f95a5bc-df02-4c08-bd3d-fbc4faa9db21\") " pod="openstack/nova-cell1-db-create-xp2nl" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.216336 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0d99944-ef47-4a37-b27b-b68ee2aafa99-operator-scripts\") pod \"nova-api-6e3c-account-create-update-4bhj5\" (UID: \"a0d99944-ef47-4a37-b27b-b68ee2aafa99\") " pod="openstack/nova-api-6e3c-account-create-update-4bhj5" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.216854 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f95a5bc-df02-4c08-bd3d-fbc4faa9db21-operator-scripts\") pod \"nova-cell1-db-create-xp2nl\" (UID: \"3f95a5bc-df02-4c08-bd3d-fbc4faa9db21\") " pod="openstack/nova-cell1-db-create-xp2nl" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.239472 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54xw6\" (UniqueName: \"kubernetes.io/projected/a0d99944-ef47-4a37-b27b-b68ee2aafa99-kube-api-access-54xw6\") pod \"nova-api-6e3c-account-create-update-4bhj5\" (UID: \"a0d99944-ef47-4a37-b27b-b68ee2aafa99\") " pod="openstack/nova-api-6e3c-account-create-update-4bhj5" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.241757 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cklj4\" (UniqueName: \"kubernetes.io/projected/3f95a5bc-df02-4c08-bd3d-fbc4faa9db21-kube-api-access-cklj4\") pod \"nova-cell1-db-create-xp2nl\" (UID: \"3f95a5bc-df02-4c08-bd3d-fbc4faa9db21\") " pod="openstack/nova-cell1-db-create-xp2nl" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.284415 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-xp2nl" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.311330 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6e3c-account-create-update-4bhj5" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.317633 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v26vs\" (UniqueName: \"kubernetes.io/projected/0088081d-47a5-4616-9c0a-36934cb45b2a-kube-api-access-v26vs\") pod \"nova-cell0-97dc-account-create-update-sslwl\" (UID: \"0088081d-47a5-4616-9c0a-36934cb45b2a\") " pod="openstack/nova-cell0-97dc-account-create-update-sslwl" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.317843 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0088081d-47a5-4616-9c0a-36934cb45b2a-operator-scripts\") pod \"nova-cell0-97dc-account-create-update-sslwl\" (UID: \"0088081d-47a5-4616-9c0a-36934cb45b2a\") " pod="openstack/nova-cell0-97dc-account-create-update-sslwl" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.365682 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-1885-account-create-update-7g8w2"] Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.366959 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1885-account-create-update-7g8w2" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.369868 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.397913 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1885-account-create-update-7g8w2"] Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.419611 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0088081d-47a5-4616-9c0a-36934cb45b2a-operator-scripts\") pod \"nova-cell0-97dc-account-create-update-sslwl\" (UID: \"0088081d-47a5-4616-9c0a-36934cb45b2a\") " pod="openstack/nova-cell0-97dc-account-create-update-sslwl" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.419713 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v26vs\" (UniqueName: \"kubernetes.io/projected/0088081d-47a5-4616-9c0a-36934cb45b2a-kube-api-access-v26vs\") pod \"nova-cell0-97dc-account-create-update-sslwl\" (UID: \"0088081d-47a5-4616-9c0a-36934cb45b2a\") " pod="openstack/nova-cell0-97dc-account-create-update-sslwl" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.420573 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0088081d-47a5-4616-9c0a-36934cb45b2a-operator-scripts\") pod \"nova-cell0-97dc-account-create-update-sslwl\" (UID: \"0088081d-47a5-4616-9c0a-36934cb45b2a\") " pod="openstack/nova-cell0-97dc-account-create-update-sslwl" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.436272 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v26vs\" (UniqueName: \"kubernetes.io/projected/0088081d-47a5-4616-9c0a-36934cb45b2a-kube-api-access-v26vs\") pod \"nova-cell0-97dc-account-create-update-sslwl\" (UID: \"0088081d-47a5-4616-9c0a-36934cb45b2a\") " 
pod="openstack/nova-cell0-97dc-account-create-update-sslwl" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.509085 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-97dc-account-create-update-sslwl" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.524855 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a92a488-1e14-45e0-9dc7-c09605d26de5-operator-scripts\") pod \"nova-cell1-1885-account-create-update-7g8w2\" (UID: \"8a92a488-1e14-45e0-9dc7-c09605d26de5\") " pod="openstack/nova-cell1-1885-account-create-update-7g8w2" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.524913 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8qsh\" (UniqueName: \"kubernetes.io/projected/8a92a488-1e14-45e0-9dc7-c09605d26de5-kube-api-access-z8qsh\") pod \"nova-cell1-1885-account-create-update-7g8w2\" (UID: \"8a92a488-1e14-45e0-9dc7-c09605d26de5\") " pod="openstack/nova-cell1-1885-account-create-update-7g8w2" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.626376 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a92a488-1e14-45e0-9dc7-c09605d26de5-operator-scripts\") pod \"nova-cell1-1885-account-create-update-7g8w2\" (UID: \"8a92a488-1e14-45e0-9dc7-c09605d26de5\") " pod="openstack/nova-cell1-1885-account-create-update-7g8w2" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.626446 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8qsh\" (UniqueName: \"kubernetes.io/projected/8a92a488-1e14-45e0-9dc7-c09605d26de5-kube-api-access-z8qsh\") pod \"nova-cell1-1885-account-create-update-7g8w2\" (UID: \"8a92a488-1e14-45e0-9dc7-c09605d26de5\") " pod="openstack/nova-cell1-1885-account-create-update-7g8w2" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.627133 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a92a488-1e14-45e0-9dc7-c09605d26de5-operator-scripts\") pod \"nova-cell1-1885-account-create-update-7g8w2\" (UID: \"8a92a488-1e14-45e0-9dc7-c09605d26de5\") " pod="openstack/nova-cell1-1885-account-create-update-7g8w2" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.643912 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8qsh\" (UniqueName: \"kubernetes.io/projected/8a92a488-1e14-45e0-9dc7-c09605d26de5-kube-api-access-z8qsh\") pod \"nova-cell1-1885-account-create-update-7g8w2\" (UID: \"8a92a488-1e14-45e0-9dc7-c09605d26de5\") " pod="openstack/nova-cell1-1885-account-create-update-7g8w2" Jan 06 14:19:19 crc kubenswrapper[4869]: I0106 14:19:19.687944 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1885-account-create-update-7g8w2" Jan 06 14:19:22 crc kubenswrapper[4869]: I0106 14:19:22.200051 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 06 14:19:24 crc kubenswrapper[4869]: I0106 14:19:24.322282 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-97dc-account-create-update-sslwl"] Jan 06 14:19:24 crc kubenswrapper[4869]: W0106 14:19:24.324865 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0088081d_47a5_4616_9c0a_36934cb45b2a.slice/crio-e7b4dfe46ad1c88de26f29e4ef624c0f898f7f3724f6f5965e07e837c1bfe45a WatchSource:0}: Error finding container e7b4dfe46ad1c88de26f29e4ef624c0f898f7f3724f6f5965e07e837c1bfe45a: Status 404 returned error can't find the container with id e7b4dfe46ad1c88de26f29e4ef624c0f898f7f3724f6f5965e07e837c1bfe45a Jan 06 14:19:24 crc kubenswrapper[4869]: I0106 14:19:24.369378 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6e3c-account-create-update-4bhj5"] Jan 06 14:19:24 crc kubenswrapper[4869]: W0106 14:19:24.370384 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0d99944_ef47_4a37_b27b_b68ee2aafa99.slice/crio-06cf5c4cbfcce4cdd918f96c32cf66717c314da0b16c23a911e0a52f5f23f56e WatchSource:0}: Error finding container 06cf5c4cbfcce4cdd918f96c32cf66717c314da0b16c23a911e0a52f5f23f56e: Status 404 returned error can't find the container with id 06cf5c4cbfcce4cdd918f96c32cf66717c314da0b16c23a911e0a52f5f23f56e Jan 06 14:19:24 crc kubenswrapper[4869]: I0106 14:19:24.394690 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6e3c-account-create-update-4bhj5" event={"ID":"a0d99944-ef47-4a37-b27b-b68ee2aafa99","Type":"ContainerStarted","Data":"06cf5c4cbfcce4cdd918f96c32cf66717c314da0b16c23a911e0a52f5f23f56e"} Jan 06 14:19:24 crc kubenswrapper[4869]: I0106 14:19:24.398906 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-97dc-account-create-update-sslwl" event={"ID":"0088081d-47a5-4616-9c0a-36934cb45b2a","Type":"ContainerStarted","Data":"e7b4dfe46ad1c88de26f29e4ef624c0f898f7f3724f6f5965e07e837c1bfe45a"} Jan 06 14:19:24 crc kubenswrapper[4869]: I0106 14:19:24.619249 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-thdfz"] Jan 06 14:19:24 crc kubenswrapper[4869]: I0106 14:19:24.627380 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-q9kvc"] Jan 06 14:19:24 crc kubenswrapper[4869]: I0106 14:19:24.636609 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xp2nl"] Jan 06 14:19:24 crc kubenswrapper[4869]: W0106 14:19:24.643876 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93537664_809a_4f60_add8_bccd7a8b08a2.slice/crio-9c3292d8aeb7f7d1cb07a681cf65d578b7d252664d68f49edac55ec1db18ace1 WatchSource:0}: Error finding container 9c3292d8aeb7f7d1cb07a681cf65d578b7d252664d68f49edac55ec1db18ace1: Status 404 returned error can't find the container with id 9c3292d8aeb7f7d1cb07a681cf65d578b7d252664d68f49edac55ec1db18ace1 Jan 06 14:19:24 crc kubenswrapper[4869]: I0106 14:19:24.765232 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-1885-account-create-update-7g8w2"] Jan 06 14:19:24 crc kubenswrapper[4869]: W0106 14:19:24.769335 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a92a488_1e14_45e0_9dc7_c09605d26de5.slice/crio-ea84a561fcea5f65cbfadcfc7e12a4b7fc8c80f8530926acb7728d63e9f181e6 WatchSource:0}: Error finding container ea84a561fcea5f65cbfadcfc7e12a4b7fc8c80f8530926acb7728d63e9f181e6: Status 404 returned error can't find the container with id ea84a561fcea5f65cbfadcfc7e12a4b7fc8c80f8530926acb7728d63e9f181e6 Jan 06 14:19:25 crc kubenswrapper[4869]: I0106 14:19:25.409130 4869 generic.go:334] "Generic (PLEG): container finished" podID="cf1c9b2b-a06d-40c4-8471-246a2041fa96" containerID="c65ea9c730db9bb85ab78dd88ea1cfa73a6fc8938dcdb7ddaf8f980c174cd27a" exitCode=0 Jan 06 14:19:25 crc kubenswrapper[4869]: I0106 14:19:25.409236 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q9kvc" event={"ID":"cf1c9b2b-a06d-40c4-8471-246a2041fa96","Type":"ContainerDied","Data":"c65ea9c730db9bb85ab78dd88ea1cfa73a6fc8938dcdb7ddaf8f980c174cd27a"} Jan 06 14:19:25 crc kubenswrapper[4869]: I0106 14:19:25.409297 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q9kvc" event={"ID":"cf1c9b2b-a06d-40c4-8471-246a2041fa96","Type":"ContainerStarted","Data":"778f7c97e481a70d8ed7522a36ba214615f1a49c8194f3a7734835ba0f10432a"} Jan 06 14:19:25 crc kubenswrapper[4869]: I0106 14:19:25.413884 4869 generic.go:334] "Generic (PLEG): container finished" podID="a0d99944-ef47-4a37-b27b-b68ee2aafa99" containerID="8d9c15e047eafd4e3e8115c4419a4acf8f37c630b04a0c0ca324731fc604bfb2" exitCode=0 Jan 06 14:19:25 crc kubenswrapper[4869]: I0106 14:19:25.413966 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6e3c-account-create-update-4bhj5" event={"ID":"a0d99944-ef47-4a37-b27b-b68ee2aafa99","Type":"ContainerDied","Data":"8d9c15e047eafd4e3e8115c4419a4acf8f37c630b04a0c0ca324731fc604bfb2"} Jan 06 14:19:25 crc kubenswrapper[4869]: I0106 14:19:25.415541 4869 generic.go:334] "Generic (PLEG): container finished" podID="0088081d-47a5-4616-9c0a-36934cb45b2a" containerID="44c925c3007a9cd06e71f4720ecee87ba629f9b34b1dd6fa33ef19cdf866fc5f" exitCode=0 Jan 06 14:19:25 crc kubenswrapper[4869]: I0106 14:19:25.415587 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-97dc-account-create-update-sslwl" event={"ID":"0088081d-47a5-4616-9c0a-36934cb45b2a","Type":"ContainerDied","Data":"44c925c3007a9cd06e71f4720ecee87ba629f9b34b1dd6fa33ef19cdf866fc5f"} Jan 06 14:19:25 crc kubenswrapper[4869]: I0106 14:19:25.417399 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"368ebbb8-5558-42d5-a18d-516ff3e623bf","Type":"ContainerStarted","Data":"509a84a36569d083efcc1b051e3bca0d561ff85ae8e7737da649f414602409f6"} Jan 06 14:19:25 crc kubenswrapper[4869]: I0106 14:19:25.420557 4869 generic.go:334] "Generic (PLEG): container finished" podID="3f95a5bc-df02-4c08-bd3d-fbc4faa9db21" containerID="a442fd258a4312bec1ba9f69131fc2f167a7c27cf6edd41fc11a7aedeb798cd8" exitCode=0 Jan 06 14:19:25 crc kubenswrapper[4869]: I0106 14:19:25.420683 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xp2nl" event={"ID":"3f95a5bc-df02-4c08-bd3d-fbc4faa9db21","Type":"ContainerDied","Data":"a442fd258a4312bec1ba9f69131fc2f167a7c27cf6edd41fc11a7aedeb798cd8"} Jan 06 14:19:25 crc kubenswrapper[4869]: 
I0106 14:19:25.420703 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xp2nl" event={"ID":"3f95a5bc-df02-4c08-bd3d-fbc4faa9db21","Type":"ContainerStarted","Data":"69c0143ae825cd0b8e8dbfcf2afa3022fddc3ee4e69c0b69f589639c3eaac6bd"} Jan 06 14:19:25 crc kubenswrapper[4869]: I0106 14:19:25.422746 4869 generic.go:334] "Generic (PLEG): container finished" podID="8a92a488-1e14-45e0-9dc7-c09605d26de5" containerID="0adbee660119e241e41e18d02fd623a28e5952ceafe45b08570787fa2f998f75" exitCode=0 Jan 06 14:19:25 crc kubenswrapper[4869]: I0106 14:19:25.422816 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1885-account-create-update-7g8w2" event={"ID":"8a92a488-1e14-45e0-9dc7-c09605d26de5","Type":"ContainerDied","Data":"0adbee660119e241e41e18d02fd623a28e5952ceafe45b08570787fa2f998f75"} Jan 06 14:19:25 crc kubenswrapper[4869]: I0106 14:19:25.422833 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1885-account-create-update-7g8w2" event={"ID":"8a92a488-1e14-45e0-9dc7-c09605d26de5","Type":"ContainerStarted","Data":"ea84a561fcea5f65cbfadcfc7e12a4b7fc8c80f8530926acb7728d63e9f181e6"} Jan 06 14:19:25 crc kubenswrapper[4869]: I0106 14:19:25.424576 4869 generic.go:334] "Generic (PLEG): container finished" podID="93537664-809a-4f60-add8-bccd7a8b08a2" containerID="5218efbc4fe6ae2dc7deeadb2e4be6dd7f1e14ce709e74c0435437d8a7480171" exitCode=0 Jan 06 14:19:25 crc kubenswrapper[4869]: I0106 14:19:25.424627 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-thdfz" event={"ID":"93537664-809a-4f60-add8-bccd7a8b08a2","Type":"ContainerDied","Data":"5218efbc4fe6ae2dc7deeadb2e4be6dd7f1e14ce709e74c0435437d8a7480171"} Jan 06 14:19:25 crc kubenswrapper[4869]: I0106 14:19:25.424719 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-thdfz" event={"ID":"93537664-809a-4f60-add8-bccd7a8b08a2","Type":"ContainerStarted","Data":"9c3292d8aeb7f7d1cb07a681cf65d578b7d252664d68f49edac55ec1db18ace1"} Jan 06 14:19:25 crc kubenswrapper[4869]: I0106 14:19:25.479250 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.955779727 podStartE2EDuration="12.479221894s" podCreationTimestamp="2026-01-06 14:19:13 +0000 UTC" firstStartedPulling="2026-01-06 14:19:14.478611532 +0000 UTC m=+1173.018299196" lastFinishedPulling="2026-01-06 14:19:24.002053699 +0000 UTC m=+1182.541741363" observedRunningTime="2026-01-06 14:19:25.472899848 +0000 UTC m=+1184.012587522" watchObservedRunningTime="2026-01-06 14:19:25.479221894 +0000 UTC m=+1184.018909568" Jan 06 14:19:26 crc kubenswrapper[4869]: I0106 14:19:26.174627 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:19:26 crc kubenswrapper[4869]: I0106 14:19:26.175371 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" containerName="ceilometer-central-agent" containerID="cri-o://c1aaa1af5ee73a9994f7da79a8c1783357484f8a354b82cb30a212ca5ab86fcc" gracePeriod=30 Jan 06 14:19:26 crc kubenswrapper[4869]: I0106 14:19:26.175501 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" containerName="ceilometer-notification-agent" containerID="cri-o://c2e4c22e4953d8f8524b30d9cbaf05317c09f7e037970f408ce6e275c86564f8" gracePeriod=30 Jan 06 
14:19:26 crc kubenswrapper[4869]: I0106 14:19:26.175502 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" containerName="sg-core" containerID="cri-o://ec203f981983ae47bbc6d9664600ece41d3d5e7dc7e27168aebe979618da4c4c" gracePeriod=30 Jan 06 14:19:26 crc kubenswrapper[4869]: I0106 14:19:26.175513 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" containerName="proxy-httpd" containerID="cri-o://ae009516ecbbdb314fb43f62169ea632fdfc67e3d15666b027cf224d01cfe027" gracePeriod=30 Jan 06 14:19:26 crc kubenswrapper[4869]: I0106 14:19:26.193776 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.152:3000/\": EOF" Jan 06 14:19:26 crc kubenswrapper[4869]: I0106 14:19:26.450265 4869 generic.go:334] "Generic (PLEG): container finished" podID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" containerID="ae009516ecbbdb314fb43f62169ea632fdfc67e3d15666b027cf224d01cfe027" exitCode=0 Jan 06 14:19:26 crc kubenswrapper[4869]: I0106 14:19:26.450314 4869 generic.go:334] "Generic (PLEG): container finished" podID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" containerID="ec203f981983ae47bbc6d9664600ece41d3d5e7dc7e27168aebe979618da4c4c" exitCode=2 Jan 06 14:19:26 crc kubenswrapper[4869]: I0106 14:19:26.451966 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc37f611-b36d-45d4-9434-9b8ca3e83efb","Type":"ContainerDied","Data":"ae009516ecbbdb314fb43f62169ea632fdfc67e3d15666b027cf224d01cfe027"} Jan 06 14:19:26 crc kubenswrapper[4869]: I0106 14:19:26.452033 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc37f611-b36d-45d4-9434-9b8ca3e83efb","Type":"ContainerDied","Data":"ec203f981983ae47bbc6d9664600ece41d3d5e7dc7e27168aebe979618da4c4c"} Jan 06 14:19:26 crc kubenswrapper[4869]: I0106 14:19:26.930719 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-97dc-account-create-update-sslwl" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.069550 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-q9kvc" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.086523 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v26vs\" (UniqueName: \"kubernetes.io/projected/0088081d-47a5-4616-9c0a-36934cb45b2a-kube-api-access-v26vs\") pod \"0088081d-47a5-4616-9c0a-36934cb45b2a\" (UID: \"0088081d-47a5-4616-9c0a-36934cb45b2a\") " Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.086714 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0088081d-47a5-4616-9c0a-36934cb45b2a-operator-scripts\") pod \"0088081d-47a5-4616-9c0a-36934cb45b2a\" (UID: \"0088081d-47a5-4616-9c0a-36934cb45b2a\") " Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.087832 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0088081d-47a5-4616-9c0a-36934cb45b2a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0088081d-47a5-4616-9c0a-36934cb45b2a" (UID: "0088081d-47a5-4616-9c0a-36934cb45b2a"). 
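
The ceilometer-0 teardown above shows the shape of a graceful stop: each container is killed with gracePeriod=30, the readiness probe immediately starts failing (the EOF on 10.217.0.152:3000 is consistent with the server closing its listener), and the exit codes then tell you who shut down cleanly: proxy-httpd exits 0, while sg-core exits 2, i.e. it did not end cleanly on its stop signal. A minimal handler that produces the clean variant, as a sketch and not ceilometer's actual code, assuming the runtime's stop signal is SIGTERM:

```go
package main

import (
	"os"
	"os/signal"
	"syscall"
)

func main() {
	term := make(chan os.Signal, 1)
	signal.Notify(term, syscall.SIGTERM) // the stop signal sent at the start of the grace period
	<-term
	// Flush and close resources here, within the 30s grace period,
	// then exit 0 so PLEG records ContainerDied with exitCode=0.
	os.Exit(0)
}
```
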
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.096239 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0088081d-47a5-4616-9c0a-36934cb45b2a-kube-api-access-v26vs" (OuterVolumeSpecName: "kube-api-access-v26vs") pod "0088081d-47a5-4616-9c0a-36934cb45b2a" (UID: "0088081d-47a5-4616-9c0a-36934cb45b2a"). InnerVolumeSpecName "kube-api-access-v26vs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.104789 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6e3c-account-create-update-4bhj5" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.127370 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-thdfz" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.141010 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xp2nl" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.155491 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1885-account-create-update-7g8w2" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.188988 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf1c9b2b-a06d-40c4-8471-246a2041fa96-operator-scripts\") pod \"cf1c9b2b-a06d-40c4-8471-246a2041fa96\" (UID: \"cf1c9b2b-a06d-40c4-8471-246a2041fa96\") " Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.189347 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0d99944-ef47-4a37-b27b-b68ee2aafa99-operator-scripts\") pod \"a0d99944-ef47-4a37-b27b-b68ee2aafa99\" (UID: \"a0d99944-ef47-4a37-b27b-b68ee2aafa99\") " Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.189534 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5b8k\" (UniqueName: \"kubernetes.io/projected/cf1c9b2b-a06d-40c4-8471-246a2041fa96-kube-api-access-v5b8k\") pod \"cf1c9b2b-a06d-40c4-8471-246a2041fa96\" (UID: \"cf1c9b2b-a06d-40c4-8471-246a2041fa96\") " Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.189848 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf1c9b2b-a06d-40c4-8471-246a2041fa96-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cf1c9b2b-a06d-40c4-8471-246a2041fa96" (UID: "cf1c9b2b-a06d-40c4-8471-246a2041fa96"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.190399 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0d99944-ef47-4a37-b27b-b68ee2aafa99-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a0d99944-ef47-4a37-b27b-b68ee2aafa99" (UID: "a0d99944-ef47-4a37-b27b-b68ee2aafa99"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.190507 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54xw6\" (UniqueName: \"kubernetes.io/projected/a0d99944-ef47-4a37-b27b-b68ee2aafa99-kube-api-access-54xw6\") pod \"a0d99944-ef47-4a37-b27b-b68ee2aafa99\" (UID: \"a0d99944-ef47-4a37-b27b-b68ee2aafa99\") " Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.191136 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0088081d-47a5-4616-9c0a-36934cb45b2a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.191162 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a0d99944-ef47-4a37-b27b-b68ee2aafa99-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.191173 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v26vs\" (UniqueName: \"kubernetes.io/projected/0088081d-47a5-4616-9c0a-36934cb45b2a-kube-api-access-v26vs\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.191185 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cf1c9b2b-a06d-40c4-8471-246a2041fa96-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.193983 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf1c9b2b-a06d-40c4-8471-246a2041fa96-kube-api-access-v5b8k" (OuterVolumeSpecName: "kube-api-access-v5b8k") pod "cf1c9b2b-a06d-40c4-8471-246a2041fa96" (UID: "cf1c9b2b-a06d-40c4-8471-246a2041fa96"). InnerVolumeSpecName "kube-api-access-v5b8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.194956 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0d99944-ef47-4a37-b27b-b68ee2aafa99-kube-api-access-54xw6" (OuterVolumeSpecName: "kube-api-access-54xw6") pod "a0d99944-ef47-4a37-b27b-b68ee2aafa99" (UID: "a0d99944-ef47-4a37-b27b-b68ee2aafa99"). InnerVolumeSpecName "kube-api-access-54xw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.294184 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8qsh\" (UniqueName: \"kubernetes.io/projected/8a92a488-1e14-45e0-9dc7-c09605d26de5-kube-api-access-z8qsh\") pod \"8a92a488-1e14-45e0-9dc7-c09605d26de5\" (UID: \"8a92a488-1e14-45e0-9dc7-c09605d26de5\") " Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.294312 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sntxl\" (UniqueName: \"kubernetes.io/projected/93537664-809a-4f60-add8-bccd7a8b08a2-kube-api-access-sntxl\") pod \"93537664-809a-4f60-add8-bccd7a8b08a2\" (UID: \"93537664-809a-4f60-add8-bccd7a8b08a2\") " Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.294358 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f95a5bc-df02-4c08-bd3d-fbc4faa9db21-operator-scripts\") pod \"3f95a5bc-df02-4c08-bd3d-fbc4faa9db21\" (UID: \"3f95a5bc-df02-4c08-bd3d-fbc4faa9db21\") " Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.294401 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a92a488-1e14-45e0-9dc7-c09605d26de5-operator-scripts\") pod \"8a92a488-1e14-45e0-9dc7-c09605d26de5\" (UID: \"8a92a488-1e14-45e0-9dc7-c09605d26de5\") " Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.294439 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93537664-809a-4f60-add8-bccd7a8b08a2-operator-scripts\") pod \"93537664-809a-4f60-add8-bccd7a8b08a2\" (UID: \"93537664-809a-4f60-add8-bccd7a8b08a2\") " Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.294491 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cklj4\" (UniqueName: \"kubernetes.io/projected/3f95a5bc-df02-4c08-bd3d-fbc4faa9db21-kube-api-access-cklj4\") pod \"3f95a5bc-df02-4c08-bd3d-fbc4faa9db21\" (UID: \"3f95a5bc-df02-4c08-bd3d-fbc4faa9db21\") " Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.295156 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5b8k\" (UniqueName: \"kubernetes.io/projected/cf1c9b2b-a06d-40c4-8471-246a2041fa96-kube-api-access-v5b8k\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.295182 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54xw6\" (UniqueName: \"kubernetes.io/projected/a0d99944-ef47-4a37-b27b-b68ee2aafa99-kube-api-access-54xw6\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.295873 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.295970 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a92a488-1e14-45e0-9dc7-c09605d26de5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a92a488-1e14-45e0-9dc7-c09605d26de5" (UID: "8a92a488-1e14-45e0-9dc7-c09605d26de5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.296443 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93537664-809a-4f60-add8-bccd7a8b08a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "93537664-809a-4f60-add8-bccd7a8b08a2" (UID: "93537664-809a-4f60-add8-bccd7a8b08a2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.298625 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f95a5bc-df02-4c08-bd3d-fbc4faa9db21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f95a5bc-df02-4c08-bd3d-fbc4faa9db21" (UID: "3f95a5bc-df02-4c08-bd3d-fbc4faa9db21"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.302832 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93537664-809a-4f60-add8-bccd7a8b08a2-kube-api-access-sntxl" (OuterVolumeSpecName: "kube-api-access-sntxl") pod "93537664-809a-4f60-add8-bccd7a8b08a2" (UID: "93537664-809a-4f60-add8-bccd7a8b08a2"). InnerVolumeSpecName "kube-api-access-sntxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.306399 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a92a488-1e14-45e0-9dc7-c09605d26de5-kube-api-access-z8qsh" (OuterVolumeSpecName: "kube-api-access-z8qsh") pod "8a92a488-1e14-45e0-9dc7-c09605d26de5" (UID: "8a92a488-1e14-45e0-9dc7-c09605d26de5"). InnerVolumeSpecName "kube-api-access-z8qsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.309802 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f95a5bc-df02-4c08-bd3d-fbc4faa9db21-kube-api-access-cklj4" (OuterVolumeSpecName: "kube-api-access-cklj4") pod "3f95a5bc-df02-4c08-bd3d-fbc4faa9db21" (UID: "3f95a5bc-df02-4c08-bd3d-fbc4faa9db21"). InnerVolumeSpecName "kube-api-access-cklj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.396601 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-config-data\") pod \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.396648 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-combined-ca-bundle\") pod \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.396917 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc37f611-b36d-45d4-9434-9b8ca3e83efb-log-httpd\") pod \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.396982 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc37f611-b36d-45d4-9434-9b8ca3e83efb-run-httpd\") pod \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.397044 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwmjr\" (UniqueName: \"kubernetes.io/projected/bc37f611-b36d-45d4-9434-9b8ca3e83efb-kube-api-access-nwmjr\") pod \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.397147 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-sg-core-conf-yaml\") pod \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.397267 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-scripts\") pod \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\" (UID: \"bc37f611-b36d-45d4-9434-9b8ca3e83efb\") " Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.397562 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc37f611-b36d-45d4-9434-9b8ca3e83efb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bc37f611-b36d-45d4-9434-9b8ca3e83efb" (UID: "bc37f611-b36d-45d4-9434-9b8ca3e83efb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.397767 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc37f611-b36d-45d4-9434-9b8ca3e83efb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bc37f611-b36d-45d4-9434-9b8ca3e83efb" (UID: "bc37f611-b36d-45d4-9434-9b8ca3e83efb"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.398462 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a92a488-1e14-45e0-9dc7-c09605d26de5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.398491 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc37f611-b36d-45d4-9434-9b8ca3e83efb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.398505 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc37f611-b36d-45d4-9434-9b8ca3e83efb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.398521 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/93537664-809a-4f60-add8-bccd7a8b08a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.398534 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cklj4\" (UniqueName: \"kubernetes.io/projected/3f95a5bc-df02-4c08-bd3d-fbc4faa9db21-kube-api-access-cklj4\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.398552 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8qsh\" (UniqueName: \"kubernetes.io/projected/8a92a488-1e14-45e0-9dc7-c09605d26de5-kube-api-access-z8qsh\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.398564 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sntxl\" (UniqueName: \"kubernetes.io/projected/93537664-809a-4f60-add8-bccd7a8b08a2-kube-api-access-sntxl\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.398577 4869 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f95a5bc-df02-4c08-bd3d-fbc4faa9db21-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.400879 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-scripts" (OuterVolumeSpecName: "scripts") pod "bc37f611-b36d-45d4-9434-9b8ca3e83efb" (UID: "bc37f611-b36d-45d4-9434-9b8ca3e83efb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.401021 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc37f611-b36d-45d4-9434-9b8ca3e83efb-kube-api-access-nwmjr" (OuterVolumeSpecName: "kube-api-access-nwmjr") pod "bc37f611-b36d-45d4-9434-9b8ca3e83efb" (UID: "bc37f611-b36d-45d4-9434-9b8ca3e83efb"). InnerVolumeSpecName "kube-api-access-nwmjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.435227 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bc37f611-b36d-45d4-9434-9b8ca3e83efb" (UID: "bc37f611-b36d-45d4-9434-9b8ca3e83efb"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.463224 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1885-account-create-update-7g8w2" event={"ID":"8a92a488-1e14-45e0-9dc7-c09605d26de5","Type":"ContainerDied","Data":"ea84a561fcea5f65cbfadcfc7e12a4b7fc8c80f8530926acb7728d63e9f181e6"} Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.463268 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea84a561fcea5f65cbfadcfc7e12a4b7fc8c80f8530926acb7728d63e9f181e6" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.463319 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1885-account-create-update-7g8w2" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.467767 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-thdfz" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.467761 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-thdfz" event={"ID":"93537664-809a-4f60-add8-bccd7a8b08a2","Type":"ContainerDied","Data":"9c3292d8aeb7f7d1cb07a681cf65d578b7d252664d68f49edac55ec1db18ace1"} Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.468009 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c3292d8aeb7f7d1cb07a681cf65d578b7d252664d68f49edac55ec1db18ace1" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.471042 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q9kvc" event={"ID":"cf1c9b2b-a06d-40c4-8471-246a2041fa96","Type":"ContainerDied","Data":"778f7c97e481a70d8ed7522a36ba214615f1a49c8194f3a7734835ba0f10432a"} Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.471084 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="778f7c97e481a70d8ed7522a36ba214615f1a49c8194f3a7734835ba0f10432a" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.471140 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-q9kvc" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.476989 4869 generic.go:334] "Generic (PLEG): container finished" podID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" containerID="c2e4c22e4953d8f8524b30d9cbaf05317c09f7e037970f408ce6e275c86564f8" exitCode=0 Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.477615 4869 generic.go:334] "Generic (PLEG): container finished" podID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" containerID="c1aaa1af5ee73a9994f7da79a8c1783357484f8a354b82cb30a212ca5ab86fcc" exitCode=0 Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.477686 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc37f611-b36d-45d4-9434-9b8ca3e83efb","Type":"ContainerDied","Data":"c2e4c22e4953d8f8524b30d9cbaf05317c09f7e037970f408ce6e275c86564f8"} Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.477715 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc37f611-b36d-45d4-9434-9b8ca3e83efb","Type":"ContainerDied","Data":"c1aaa1af5ee73a9994f7da79a8c1783357484f8a354b82cb30a212ca5ab86fcc"} Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.477728 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc37f611-b36d-45d4-9434-9b8ca3e83efb","Type":"ContainerDied","Data":"319ee49598248f702f1c739930beb87284ed9768ddb168c92116f380986879e2"} Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.477744 4869 scope.go:117] "RemoveContainer" containerID="ae009516ecbbdb314fb43f62169ea632fdfc67e3d15666b027cf224d01cfe027" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.478033 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.480070 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc37f611-b36d-45d4-9434-9b8ca3e83efb" (UID: "bc37f611-b36d-45d4-9434-9b8ca3e83efb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.484416 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6e3c-account-create-update-4bhj5" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.486404 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6e3c-account-create-update-4bhj5" event={"ID":"a0d99944-ef47-4a37-b27b-b68ee2aafa99","Type":"ContainerDied","Data":"06cf5c4cbfcce4cdd918f96c32cf66717c314da0b16c23a911e0a52f5f23f56e"} Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.486434 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06cf5c4cbfcce4cdd918f96c32cf66717c314da0b16c23a911e0a52f5f23f56e" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.489062 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-97dc-account-create-update-sslwl" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.489118 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-97dc-account-create-update-sslwl" event={"ID":"0088081d-47a5-4616-9c0a-36934cb45b2a","Type":"ContainerDied","Data":"e7b4dfe46ad1c88de26f29e4ef624c0f898f7f3724f6f5965e07e837c1bfe45a"} Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.489160 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7b4dfe46ad1c88de26f29e4ef624c0f898f7f3724f6f5965e07e837c1bfe45a" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.492396 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xp2nl" event={"ID":"3f95a5bc-df02-4c08-bd3d-fbc4faa9db21","Type":"ContainerDied","Data":"69c0143ae825cd0b8e8dbfcf2afa3022fddc3ee4e69c0b69f589639c3eaac6bd"} Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.492589 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69c0143ae825cd0b8e8dbfcf2afa3022fddc3ee4e69c0b69f589639c3eaac6bd" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.492675 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xp2nl" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.499795 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.499823 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.499835 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwmjr\" (UniqueName: \"kubernetes.io/projected/bc37f611-b36d-45d4-9434-9b8ca3e83efb-kube-api-access-nwmjr\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.499846 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.519016 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-config-data" (OuterVolumeSpecName: "config-data") pod "bc37f611-b36d-45d4-9434-9b8ca3e83efb" (UID: "bc37f611-b36d-45d4-9434-9b8ca3e83efb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.546266 4869 scope.go:117] "RemoveContainer" containerID="ec203f981983ae47bbc6d9664600ece41d3d5e7dc7e27168aebe979618da4c4c" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.570643 4869 scope.go:117] "RemoveContainer" containerID="c2e4c22e4953d8f8524b30d9cbaf05317c09f7e037970f408ce6e275c86564f8" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.594471 4869 scope.go:117] "RemoveContainer" containerID="c1aaa1af5ee73a9994f7da79a8c1783357484f8a354b82cb30a212ca5ab86fcc" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.601202 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc37f611-b36d-45d4-9434-9b8ca3e83efb-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.618610 4869 scope.go:117] "RemoveContainer" containerID="ae009516ecbbdb314fb43f62169ea632fdfc67e3d15666b027cf224d01cfe027" Jan 06 14:19:27 crc kubenswrapper[4869]: E0106 14:19:27.619217 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae009516ecbbdb314fb43f62169ea632fdfc67e3d15666b027cf224d01cfe027\": container with ID starting with ae009516ecbbdb314fb43f62169ea632fdfc67e3d15666b027cf224d01cfe027 not found: ID does not exist" containerID="ae009516ecbbdb314fb43f62169ea632fdfc67e3d15666b027cf224d01cfe027" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.619264 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae009516ecbbdb314fb43f62169ea632fdfc67e3d15666b027cf224d01cfe027"} err="failed to get container status \"ae009516ecbbdb314fb43f62169ea632fdfc67e3d15666b027cf224d01cfe027\": rpc error: code = NotFound desc = could not find container \"ae009516ecbbdb314fb43f62169ea632fdfc67e3d15666b027cf224d01cfe027\": container with ID starting with ae009516ecbbdb314fb43f62169ea632fdfc67e3d15666b027cf224d01cfe027 not found: ID does not exist" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.619285 4869 scope.go:117] "RemoveContainer" containerID="ec203f981983ae47bbc6d9664600ece41d3d5e7dc7e27168aebe979618da4c4c" Jan 06 14:19:27 crc kubenswrapper[4869]: E0106 14:19:27.620172 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec203f981983ae47bbc6d9664600ece41d3d5e7dc7e27168aebe979618da4c4c\": container with ID starting with ec203f981983ae47bbc6d9664600ece41d3d5e7dc7e27168aebe979618da4c4c not found: ID does not exist" containerID="ec203f981983ae47bbc6d9664600ece41d3d5e7dc7e27168aebe979618da4c4c" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.620247 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec203f981983ae47bbc6d9664600ece41d3d5e7dc7e27168aebe979618da4c4c"} err="failed to get container status \"ec203f981983ae47bbc6d9664600ece41d3d5e7dc7e27168aebe979618da4c4c\": rpc error: code = NotFound desc = could not find container \"ec203f981983ae47bbc6d9664600ece41d3d5e7dc7e27168aebe979618da4c4c\": container with ID starting with ec203f981983ae47bbc6d9664600ece41d3d5e7dc7e27168aebe979618da4c4c not found: ID does not exist" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.620263 4869 scope.go:117] "RemoveContainer" containerID="c2e4c22e4953d8f8524b30d9cbaf05317c09f7e037970f408ce6e275c86564f8" Jan 06 14:19:27 crc kubenswrapper[4869]: E0106 
14:19:27.620713 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2e4c22e4953d8f8524b30d9cbaf05317c09f7e037970f408ce6e275c86564f8\": container with ID starting with c2e4c22e4953d8f8524b30d9cbaf05317c09f7e037970f408ce6e275c86564f8 not found: ID does not exist" containerID="c2e4c22e4953d8f8524b30d9cbaf05317c09f7e037970f408ce6e275c86564f8" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.620750 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2e4c22e4953d8f8524b30d9cbaf05317c09f7e037970f408ce6e275c86564f8"} err="failed to get container status \"c2e4c22e4953d8f8524b30d9cbaf05317c09f7e037970f408ce6e275c86564f8\": rpc error: code = NotFound desc = could not find container \"c2e4c22e4953d8f8524b30d9cbaf05317c09f7e037970f408ce6e275c86564f8\": container with ID starting with c2e4c22e4953d8f8524b30d9cbaf05317c09f7e037970f408ce6e275c86564f8 not found: ID does not exist" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.620762 4869 scope.go:117] "RemoveContainer" containerID="c1aaa1af5ee73a9994f7da79a8c1783357484f8a354b82cb30a212ca5ab86fcc" Jan 06 14:19:27 crc kubenswrapper[4869]: E0106 14:19:27.620985 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1aaa1af5ee73a9994f7da79a8c1783357484f8a354b82cb30a212ca5ab86fcc\": container with ID starting with c1aaa1af5ee73a9994f7da79a8c1783357484f8a354b82cb30a212ca5ab86fcc not found: ID does not exist" containerID="c1aaa1af5ee73a9994f7da79a8c1783357484f8a354b82cb30a212ca5ab86fcc" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.621002 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1aaa1af5ee73a9994f7da79a8c1783357484f8a354b82cb30a212ca5ab86fcc"} err="failed to get container status \"c1aaa1af5ee73a9994f7da79a8c1783357484f8a354b82cb30a212ca5ab86fcc\": rpc error: code = NotFound desc = could not find container \"c1aaa1af5ee73a9994f7da79a8c1783357484f8a354b82cb30a212ca5ab86fcc\": container with ID starting with c1aaa1af5ee73a9994f7da79a8c1783357484f8a354b82cb30a212ca5ab86fcc not found: ID does not exist" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.621014 4869 scope.go:117] "RemoveContainer" containerID="ae009516ecbbdb314fb43f62169ea632fdfc67e3d15666b027cf224d01cfe027" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.621241 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae009516ecbbdb314fb43f62169ea632fdfc67e3d15666b027cf224d01cfe027"} err="failed to get container status \"ae009516ecbbdb314fb43f62169ea632fdfc67e3d15666b027cf224d01cfe027\": rpc error: code = NotFound desc = could not find container \"ae009516ecbbdb314fb43f62169ea632fdfc67e3d15666b027cf224d01cfe027\": container with ID starting with ae009516ecbbdb314fb43f62169ea632fdfc67e3d15666b027cf224d01cfe027 not found: ID does not exist" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.621259 4869 scope.go:117] "RemoveContainer" containerID="ec203f981983ae47bbc6d9664600ece41d3d5e7dc7e27168aebe979618da4c4c" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.621507 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec203f981983ae47bbc6d9664600ece41d3d5e7dc7e27168aebe979618da4c4c"} err="failed to get container status \"ec203f981983ae47bbc6d9664600ece41d3d5e7dc7e27168aebe979618da4c4c\": rpc error: code = 
NotFound desc = could not find container \"ec203f981983ae47bbc6d9664600ece41d3d5e7dc7e27168aebe979618da4c4c\": container with ID starting with ec203f981983ae47bbc6d9664600ece41d3d5e7dc7e27168aebe979618da4c4c not found: ID does not exist" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.621537 4869 scope.go:117] "RemoveContainer" containerID="c2e4c22e4953d8f8524b30d9cbaf05317c09f7e037970f408ce6e275c86564f8" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.621823 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2e4c22e4953d8f8524b30d9cbaf05317c09f7e037970f408ce6e275c86564f8"} err="failed to get container status \"c2e4c22e4953d8f8524b30d9cbaf05317c09f7e037970f408ce6e275c86564f8\": rpc error: code = NotFound desc = could not find container \"c2e4c22e4953d8f8524b30d9cbaf05317c09f7e037970f408ce6e275c86564f8\": container with ID starting with c2e4c22e4953d8f8524b30d9cbaf05317c09f7e037970f408ce6e275c86564f8 not found: ID does not exist" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.621856 4869 scope.go:117] "RemoveContainer" containerID="c1aaa1af5ee73a9994f7da79a8c1783357484f8a354b82cb30a212ca5ab86fcc" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.622223 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1aaa1af5ee73a9994f7da79a8c1783357484f8a354b82cb30a212ca5ab86fcc"} err="failed to get container status \"c1aaa1af5ee73a9994f7da79a8c1783357484f8a354b82cb30a212ca5ab86fcc\": rpc error: code = NotFound desc = could not find container \"c1aaa1af5ee73a9994f7da79a8c1783357484f8a354b82cb30a212ca5ab86fcc\": container with ID starting with c1aaa1af5ee73a9994f7da79a8c1783357484f8a354b82cb30a212ca5ab86fcc not found: ID does not exist" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.799366 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.806961 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.825054 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:19:27 crc kubenswrapper[4869]: E0106 14:19:27.825449 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a92a488-1e14-45e0-9dc7-c09605d26de5" containerName="mariadb-account-create-update" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.825472 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a92a488-1e14-45e0-9dc7-c09605d26de5" containerName="mariadb-account-create-update" Jan 06 14:19:27 crc kubenswrapper[4869]: E0106 14:19:27.825494 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93537664-809a-4f60-add8-bccd7a8b08a2" containerName="mariadb-database-create" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.825502 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="93537664-809a-4f60-add8-bccd7a8b08a2" containerName="mariadb-database-create" Jan 06 14:19:27 crc kubenswrapper[4869]: E0106 14:19:27.825526 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" containerName="ceilometer-central-agent" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.825534 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" containerName="ceilometer-central-agent" Jan 06 14:19:27 crc kubenswrapper[4869]: E0106 14:19:27.825550 4869 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf1c9b2b-a06d-40c4-8471-246a2041fa96" containerName="mariadb-database-create" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.825557 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf1c9b2b-a06d-40c4-8471-246a2041fa96" containerName="mariadb-database-create" Jan 06 14:19:27 crc kubenswrapper[4869]: E0106 14:19:27.825571 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" containerName="ceilometer-notification-agent" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.825578 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" containerName="ceilometer-notification-agent" Jan 06 14:19:27 crc kubenswrapper[4869]: E0106 14:19:27.825590 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" containerName="proxy-httpd" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.825598 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" containerName="proxy-httpd" Jan 06 14:19:27 crc kubenswrapper[4869]: E0106 14:19:27.825617 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0088081d-47a5-4616-9c0a-36934cb45b2a" containerName="mariadb-account-create-update" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.825624 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0088081d-47a5-4616-9c0a-36934cb45b2a" containerName="mariadb-account-create-update" Jan 06 14:19:27 crc kubenswrapper[4869]: E0106 14:19:27.825637 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0d99944-ef47-4a37-b27b-b68ee2aafa99" containerName="mariadb-account-create-update" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.825645 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0d99944-ef47-4a37-b27b-b68ee2aafa99" containerName="mariadb-account-create-update" Jan 06 14:19:27 crc kubenswrapper[4869]: E0106 14:19:27.825658 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" containerName="sg-core" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.825684 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" containerName="sg-core" Jan 06 14:19:27 crc kubenswrapper[4869]: E0106 14:19:27.825697 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f95a5bc-df02-4c08-bd3d-fbc4faa9db21" containerName="mariadb-database-create" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.825705 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f95a5bc-df02-4c08-bd3d-fbc4faa9db21" containerName="mariadb-database-create" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.825922 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" containerName="sg-core" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.825945 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0d99944-ef47-4a37-b27b-b68ee2aafa99" containerName="mariadb-account-create-update" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.825956 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a92a488-1e14-45e0-9dc7-c09605d26de5" containerName="mariadb-account-create-update" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.825971 4869 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" containerName="ceilometer-central-agent" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.825984 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="93537664-809a-4f60-add8-bccd7a8b08a2" containerName="mariadb-database-create" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.825994 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0088081d-47a5-4616-9c0a-36934cb45b2a" containerName="mariadb-account-create-update" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.826004 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" containerName="proxy-httpd" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.826024 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f95a5bc-df02-4c08-bd3d-fbc4faa9db21" containerName="mariadb-database-create" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.826034 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" containerName="ceilometer-notification-agent" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.826046 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf1c9b2b-a06d-40c4-8471-246a2041fa96" containerName="mariadb-database-create" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.830731 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.833850 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.834100 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.848783 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.905847 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " pod="openstack/ceilometer-0" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.905967 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz75v\" (UniqueName: \"kubernetes.io/projected/cdd90245-1be0-49ff-be9b-87afcee5ec50-kube-api-access-dz75v\") pod \"ceilometer-0\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " pod="openstack/ceilometer-0" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.905992 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdd90245-1be0-49ff-be9b-87afcee5ec50-run-httpd\") pod \"ceilometer-0\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " pod="openstack/ceilometer-0" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.906038 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " pod="openstack/ceilometer-0" Jan 06 14:19:27 crc 
kubenswrapper[4869]: I0106 14:19:27.906084 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdd90245-1be0-49ff-be9b-87afcee5ec50-log-httpd\") pod \"ceilometer-0\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " pod="openstack/ceilometer-0" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.906116 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-config-data\") pod \"ceilometer-0\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " pod="openstack/ceilometer-0" Jan 06 14:19:27 crc kubenswrapper[4869]: I0106 14:19:27.906161 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-scripts\") pod \"ceilometer-0\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " pod="openstack/ceilometer-0" Jan 06 14:19:28 crc kubenswrapper[4869]: I0106 14:19:28.007189 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " pod="openstack/ceilometer-0" Jan 06 14:19:28 crc kubenswrapper[4869]: I0106 14:19:28.007270 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz75v\" (UniqueName: \"kubernetes.io/projected/cdd90245-1be0-49ff-be9b-87afcee5ec50-kube-api-access-dz75v\") pod \"ceilometer-0\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " pod="openstack/ceilometer-0" Jan 06 14:19:28 crc kubenswrapper[4869]: I0106 14:19:28.007292 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdd90245-1be0-49ff-be9b-87afcee5ec50-run-httpd\") pod \"ceilometer-0\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " pod="openstack/ceilometer-0" Jan 06 14:19:28 crc kubenswrapper[4869]: I0106 14:19:28.007319 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " pod="openstack/ceilometer-0" Jan 06 14:19:28 crc kubenswrapper[4869]: I0106 14:19:28.007345 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdd90245-1be0-49ff-be9b-87afcee5ec50-log-httpd\") pod \"ceilometer-0\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " pod="openstack/ceilometer-0" Jan 06 14:19:28 crc kubenswrapper[4869]: I0106 14:19:28.007377 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-config-data\") pod \"ceilometer-0\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " pod="openstack/ceilometer-0" Jan 06 14:19:28 crc kubenswrapper[4869]: I0106 14:19:28.007415 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-scripts\") pod \"ceilometer-0\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " pod="openstack/ceilometer-0" Jan 06 
14:19:28 crc kubenswrapper[4869]: I0106 14:19:28.008468 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdd90245-1be0-49ff-be9b-87afcee5ec50-run-httpd\") pod \"ceilometer-0\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " pod="openstack/ceilometer-0" Jan 06 14:19:28 crc kubenswrapper[4869]: I0106 14:19:28.009278 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdd90245-1be0-49ff-be9b-87afcee5ec50-log-httpd\") pod \"ceilometer-0\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " pod="openstack/ceilometer-0" Jan 06 14:19:28 crc kubenswrapper[4869]: I0106 14:19:28.018968 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-scripts\") pod \"ceilometer-0\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " pod="openstack/ceilometer-0" Jan 06 14:19:28 crc kubenswrapper[4869]: I0106 14:19:28.019195 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-config-data\") pod \"ceilometer-0\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " pod="openstack/ceilometer-0" Jan 06 14:19:28 crc kubenswrapper[4869]: I0106 14:19:28.021271 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " pod="openstack/ceilometer-0" Jan 06 14:19:28 crc kubenswrapper[4869]: I0106 14:19:28.022484 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " pod="openstack/ceilometer-0" Jan 06 14:19:28 crc kubenswrapper[4869]: I0106 14:19:28.024742 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz75v\" (UniqueName: \"kubernetes.io/projected/cdd90245-1be0-49ff-be9b-87afcee5ec50-kube-api-access-dz75v\") pod \"ceilometer-0\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " pod="openstack/ceilometer-0" Jan 06 14:19:28 crc kubenswrapper[4869]: I0106 14:19:28.151951 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:19:28 crc kubenswrapper[4869]: I0106 14:19:28.537189 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:19:28 crc kubenswrapper[4869]: I0106 14:19:28.672252 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:19:29 crc kubenswrapper[4869]: I0106 14:19:29.446437 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mlp2w"] Jan 06 14:19:29 crc kubenswrapper[4869]: I0106 14:19:29.448449 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mlp2w" Jan 06 14:19:29 crc kubenswrapper[4869]: I0106 14:19:29.451041 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 06 14:19:29 crc kubenswrapper[4869]: I0106 14:19:29.451184 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zvfp8" Jan 06 14:19:29 crc kubenswrapper[4869]: I0106 14:19:29.451294 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 06 14:19:29 crc kubenswrapper[4869]: I0106 14:19:29.470610 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mlp2w"] Jan 06 14:19:29 crc kubenswrapper[4869]: I0106 14:19:29.529530 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdd90245-1be0-49ff-be9b-87afcee5ec50","Type":"ContainerStarted","Data":"efd07e2e51644f6cea5cee4b342049f2fdb6fc05eb608d9eb7691daa98a7efcc"} Jan 06 14:19:29 crc kubenswrapper[4869]: I0106 14:19:29.532838 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bae6e299-373f-4381-ac37-5aadba9f902f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mlp2w\" (UID: \"bae6e299-373f-4381-ac37-5aadba9f902f\") " pod="openstack/nova-cell0-conductor-db-sync-mlp2w" Jan 06 14:19:29 crc kubenswrapper[4869]: I0106 14:19:29.532898 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bae6e299-373f-4381-ac37-5aadba9f902f-scripts\") pod \"nova-cell0-conductor-db-sync-mlp2w\" (UID: \"bae6e299-373f-4381-ac37-5aadba9f902f\") " pod="openstack/nova-cell0-conductor-db-sync-mlp2w" Jan 06 14:19:29 crc kubenswrapper[4869]: I0106 14:19:29.533078 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghtql\" (UniqueName: \"kubernetes.io/projected/bae6e299-373f-4381-ac37-5aadba9f902f-kube-api-access-ghtql\") pod \"nova-cell0-conductor-db-sync-mlp2w\" (UID: \"bae6e299-373f-4381-ac37-5aadba9f902f\") " pod="openstack/nova-cell0-conductor-db-sync-mlp2w" Jan 06 14:19:29 crc kubenswrapper[4869]: I0106 14:19:29.533125 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bae6e299-373f-4381-ac37-5aadba9f902f-config-data\") pod \"nova-cell0-conductor-db-sync-mlp2w\" (UID: \"bae6e299-373f-4381-ac37-5aadba9f902f\") " pod="openstack/nova-cell0-conductor-db-sync-mlp2w" Jan 06 14:19:29 crc kubenswrapper[4869]: I0106 14:19:29.635759 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bae6e299-373f-4381-ac37-5aadba9f902f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mlp2w\" (UID: \"bae6e299-373f-4381-ac37-5aadba9f902f\") " pod="openstack/nova-cell0-conductor-db-sync-mlp2w" Jan 06 14:19:29 crc kubenswrapper[4869]: I0106 14:19:29.635857 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bae6e299-373f-4381-ac37-5aadba9f902f-scripts\") pod \"nova-cell0-conductor-db-sync-mlp2w\" (UID: \"bae6e299-373f-4381-ac37-5aadba9f902f\") " pod="openstack/nova-cell0-conductor-db-sync-mlp2w" Jan 06 14:19:29 crc 
kubenswrapper[4869]: I0106 14:19:29.636976 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghtql\" (UniqueName: \"kubernetes.io/projected/bae6e299-373f-4381-ac37-5aadba9f902f-kube-api-access-ghtql\") pod \"nova-cell0-conductor-db-sync-mlp2w\" (UID: \"bae6e299-373f-4381-ac37-5aadba9f902f\") " pod="openstack/nova-cell0-conductor-db-sync-mlp2w" Jan 06 14:19:29 crc kubenswrapper[4869]: I0106 14:19:29.637032 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bae6e299-373f-4381-ac37-5aadba9f902f-config-data\") pod \"nova-cell0-conductor-db-sync-mlp2w\" (UID: \"bae6e299-373f-4381-ac37-5aadba9f902f\") " pod="openstack/nova-cell0-conductor-db-sync-mlp2w" Jan 06 14:19:29 crc kubenswrapper[4869]: I0106 14:19:29.640940 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bae6e299-373f-4381-ac37-5aadba9f902f-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mlp2w\" (UID: \"bae6e299-373f-4381-ac37-5aadba9f902f\") " pod="openstack/nova-cell0-conductor-db-sync-mlp2w" Jan 06 14:19:29 crc kubenswrapper[4869]: I0106 14:19:29.641154 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bae6e299-373f-4381-ac37-5aadba9f902f-config-data\") pod \"nova-cell0-conductor-db-sync-mlp2w\" (UID: \"bae6e299-373f-4381-ac37-5aadba9f902f\") " pod="openstack/nova-cell0-conductor-db-sync-mlp2w" Jan 06 14:19:29 crc kubenswrapper[4869]: I0106 14:19:29.642124 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bae6e299-373f-4381-ac37-5aadba9f902f-scripts\") pod \"nova-cell0-conductor-db-sync-mlp2w\" (UID: \"bae6e299-373f-4381-ac37-5aadba9f902f\") " pod="openstack/nova-cell0-conductor-db-sync-mlp2w" Jan 06 14:19:29 crc kubenswrapper[4869]: I0106 14:19:29.654968 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghtql\" (UniqueName: \"kubernetes.io/projected/bae6e299-373f-4381-ac37-5aadba9f902f-kube-api-access-ghtql\") pod \"nova-cell0-conductor-db-sync-mlp2w\" (UID: \"bae6e299-373f-4381-ac37-5aadba9f902f\") " pod="openstack/nova-cell0-conductor-db-sync-mlp2w" Jan 06 14:19:29 crc kubenswrapper[4869]: I0106 14:19:29.722869 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc37f611-b36d-45d4-9434-9b8ca3e83efb" path="/var/lib/kubelet/pods/bc37f611-b36d-45d4-9434-9b8ca3e83efb/volumes" Jan 06 14:19:29 crc kubenswrapper[4869]: I0106 14:19:29.769407 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mlp2w" Jan 06 14:19:30 crc kubenswrapper[4869]: I0106 14:19:30.316318 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mlp2w"] Jan 06 14:19:30 crc kubenswrapper[4869]: W0106 14:19:30.322439 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbae6e299_373f_4381_ac37_5aadba9f902f.slice/crio-bd53a06df8296b2023dc3a34e004ea9c3966a467dab6874616fa8acf0540942c WatchSource:0}: Error finding container bd53a06df8296b2023dc3a34e004ea9c3966a467dab6874616fa8acf0540942c: Status 404 returned error can't find the container with id bd53a06df8296b2023dc3a34e004ea9c3966a467dab6874616fa8acf0540942c Jan 06 14:19:30 crc kubenswrapper[4869]: I0106 14:19:30.541398 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mlp2w" event={"ID":"bae6e299-373f-4381-ac37-5aadba9f902f","Type":"ContainerStarted","Data":"bd53a06df8296b2023dc3a34e004ea9c3966a467dab6874616fa8acf0540942c"} Jan 06 14:19:31 crc kubenswrapper[4869]: I0106 14:19:31.550862 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdd90245-1be0-49ff-be9b-87afcee5ec50","Type":"ContainerStarted","Data":"42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6"} Jan 06 14:19:32 crc kubenswrapper[4869]: I0106 14:19:32.564145 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdd90245-1be0-49ff-be9b-87afcee5ec50","Type":"ContainerStarted","Data":"7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7"} Jan 06 14:19:33 crc kubenswrapper[4869]: I0106 14:19:33.573941 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdd90245-1be0-49ff-be9b-87afcee5ec50","Type":"ContainerStarted","Data":"5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338"} Jan 06 14:19:38 crc kubenswrapper[4869]: I0106 14:19:38.625697 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mlp2w" event={"ID":"bae6e299-373f-4381-ac37-5aadba9f902f","Type":"ContainerStarted","Data":"78ac26826386d5faa21a28fe2dbe37fed31231000348d6e855e73758afeb49aa"} Jan 06 14:19:38 crc kubenswrapper[4869]: I0106 14:19:38.634612 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdd90245-1be0-49ff-be9b-87afcee5ec50","Type":"ContainerStarted","Data":"0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92"} Jan 06 14:19:38 crc kubenswrapper[4869]: I0106 14:19:38.634856 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cdd90245-1be0-49ff-be9b-87afcee5ec50" containerName="ceilometer-central-agent" containerID="cri-o://42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6" gracePeriod=30 Jan 06 14:19:38 crc kubenswrapper[4869]: I0106 14:19:38.635984 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 06 14:19:38 crc kubenswrapper[4869]: I0106 14:19:38.636063 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cdd90245-1be0-49ff-be9b-87afcee5ec50" containerName="proxy-httpd" containerID="cri-o://0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92" gracePeriod=30 Jan 06 14:19:38 crc kubenswrapper[4869]: I0106 14:19:38.636151 4869 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cdd90245-1be0-49ff-be9b-87afcee5ec50" containerName="sg-core" containerID="cri-o://5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338" gracePeriod=30 Jan 06 14:19:38 crc kubenswrapper[4869]: I0106 14:19:38.636259 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cdd90245-1be0-49ff-be9b-87afcee5ec50" containerName="ceilometer-notification-agent" containerID="cri-o://7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7" gracePeriod=30 Jan 06 14:19:38 crc kubenswrapper[4869]: I0106 14:19:38.661054 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-mlp2w" podStartSLOduration=1.9588080909999999 podStartE2EDuration="9.661026386s" podCreationTimestamp="2026-01-06 14:19:29 +0000 UTC" firstStartedPulling="2026-01-06 14:19:30.324411203 +0000 UTC m=+1188.864098867" lastFinishedPulling="2026-01-06 14:19:38.026629498 +0000 UTC m=+1196.566317162" observedRunningTime="2026-01-06 14:19:38.651849738 +0000 UTC m=+1197.191537412" watchObservedRunningTime="2026-01-06 14:19:38.661026386 +0000 UTC m=+1197.200714050" Jan 06 14:19:38 crc kubenswrapper[4869]: I0106 14:19:38.683222 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.338824042 podStartE2EDuration="11.683200116s" podCreationTimestamp="2026-01-06 14:19:27 +0000 UTC" firstStartedPulling="2026-01-06 14:19:28.681827263 +0000 UTC m=+1187.221514927" lastFinishedPulling="2026-01-06 14:19:38.026203347 +0000 UTC m=+1196.565891001" observedRunningTime="2026-01-06 14:19:38.68216473 +0000 UTC m=+1197.221852414" watchObservedRunningTime="2026-01-06 14:19:38.683200116 +0000 UTC m=+1197.222887780" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.559800 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.641084 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdd90245-1be0-49ff-be9b-87afcee5ec50-log-httpd\") pod \"cdd90245-1be0-49ff-be9b-87afcee5ec50\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.641227 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-sg-core-conf-yaml\") pod \"cdd90245-1be0-49ff-be9b-87afcee5ec50\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.641336 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-scripts\") pod \"cdd90245-1be0-49ff-be9b-87afcee5ec50\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.641499 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz75v\" (UniqueName: \"kubernetes.io/projected/cdd90245-1be0-49ff-be9b-87afcee5ec50-kube-api-access-dz75v\") pod \"cdd90245-1be0-49ff-be9b-87afcee5ec50\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.641573 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-config-data\") pod \"cdd90245-1be0-49ff-be9b-87afcee5ec50\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.641612 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdd90245-1be0-49ff-be9b-87afcee5ec50-run-httpd\") pod \"cdd90245-1be0-49ff-be9b-87afcee5ec50\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.641687 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-combined-ca-bundle\") pod \"cdd90245-1be0-49ff-be9b-87afcee5ec50\" (UID: \"cdd90245-1be0-49ff-be9b-87afcee5ec50\") " Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.642956 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdd90245-1be0-49ff-be9b-87afcee5ec50-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cdd90245-1be0-49ff-be9b-87afcee5ec50" (UID: "cdd90245-1be0-49ff-be9b-87afcee5ec50"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.645419 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdd90245-1be0-49ff-be9b-87afcee5ec50-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cdd90245-1be0-49ff-be9b-87afcee5ec50" (UID: "cdd90245-1be0-49ff-be9b-87afcee5ec50"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.650471 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-scripts" (OuterVolumeSpecName: "scripts") pod "cdd90245-1be0-49ff-be9b-87afcee5ec50" (UID: "cdd90245-1be0-49ff-be9b-87afcee5ec50"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.651466 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdd90245-1be0-49ff-be9b-87afcee5ec50-kube-api-access-dz75v" (OuterVolumeSpecName: "kube-api-access-dz75v") pod "cdd90245-1be0-49ff-be9b-87afcee5ec50" (UID: "cdd90245-1be0-49ff-be9b-87afcee5ec50"). InnerVolumeSpecName "kube-api-access-dz75v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.651908 4869 generic.go:334] "Generic (PLEG): container finished" podID="cdd90245-1be0-49ff-be9b-87afcee5ec50" containerID="0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92" exitCode=0 Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.651946 4869 generic.go:334] "Generic (PLEG): container finished" podID="cdd90245-1be0-49ff-be9b-87afcee5ec50" containerID="5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338" exitCode=2 Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.651956 4869 generic.go:334] "Generic (PLEG): container finished" podID="cdd90245-1be0-49ff-be9b-87afcee5ec50" containerID="7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7" exitCode=0 Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.651965 4869 generic.go:334] "Generic (PLEG): container finished" podID="cdd90245-1be0-49ff-be9b-87afcee5ec50" containerID="42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6" exitCode=0 Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.651971 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdd90245-1be0-49ff-be9b-87afcee5ec50","Type":"ContainerDied","Data":"0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92"} Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.652074 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdd90245-1be0-49ff-be9b-87afcee5ec50","Type":"ContainerDied","Data":"5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338"} Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.652093 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdd90245-1be0-49ff-be9b-87afcee5ec50","Type":"ContainerDied","Data":"7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7"} Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.652135 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdd90245-1be0-49ff-be9b-87afcee5ec50","Type":"ContainerDied","Data":"42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6"} Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.652154 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdd90245-1be0-49ff-be9b-87afcee5ec50","Type":"ContainerDied","Data":"efd07e2e51644f6cea5cee4b342049f2fdb6fc05eb608d9eb7691daa98a7efcc"} Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.652172 4869 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.652181 4869 scope.go:117] "RemoveContainer" containerID="0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.682445 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cdd90245-1be0-49ff-be9b-87afcee5ec50" (UID: "cdd90245-1be0-49ff-be9b-87afcee5ec50"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.715732 4869 scope.go:117] "RemoveContainer" containerID="5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.719064 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdd90245-1be0-49ff-be9b-87afcee5ec50" (UID: "cdd90245-1be0-49ff-be9b-87afcee5ec50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.743558 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz75v\" (UniqueName: \"kubernetes.io/projected/cdd90245-1be0-49ff-be9b-87afcee5ec50-kube-api-access-dz75v\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.744251 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdd90245-1be0-49ff-be9b-87afcee5ec50-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.744279 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.744292 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdd90245-1be0-49ff-be9b-87afcee5ec50-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.744301 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.744312 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.746500 4869 scope.go:117] "RemoveContainer" containerID="7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.755808 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-config-data" (OuterVolumeSpecName: "config-data") pod "cdd90245-1be0-49ff-be9b-87afcee5ec50" (UID: "cdd90245-1be0-49ff-be9b-87afcee5ec50"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.774888 4869 scope.go:117] "RemoveContainer" containerID="42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.801781 4869 scope.go:117] "RemoveContainer" containerID="0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92" Jan 06 14:19:39 crc kubenswrapper[4869]: E0106 14:19:39.802393 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92\": container with ID starting with 0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92 not found: ID does not exist" containerID="0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.802429 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92"} err="failed to get container status \"0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92\": rpc error: code = NotFound desc = could not find container \"0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92\": container with ID starting with 0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92 not found: ID does not exist" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.802456 4869 scope.go:117] "RemoveContainer" containerID="5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338" Jan 06 14:19:39 crc kubenswrapper[4869]: E0106 14:19:39.802912 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338\": container with ID starting with 5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338 not found: ID does not exist" containerID="5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.802972 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338"} err="failed to get container status \"5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338\": rpc error: code = NotFound desc = could not find container \"5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338\": container with ID starting with 5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338 not found: ID does not exist" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.802998 4869 scope.go:117] "RemoveContainer" containerID="7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7" Jan 06 14:19:39 crc kubenswrapper[4869]: E0106 14:19:39.803429 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7\": container with ID starting with 7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7 not found: ID does not exist" containerID="7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.803456 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7"} err="failed to get container status \"7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7\": rpc error: code = NotFound desc = could not find container \"7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7\": container with ID starting with 7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7 not found: ID does not exist" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.803479 4869 scope.go:117] "RemoveContainer" containerID="42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6" Jan 06 14:19:39 crc kubenswrapper[4869]: E0106 14:19:39.803886 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6\": container with ID starting with 42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6 not found: ID does not exist" containerID="42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.803933 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6"} err="failed to get container status \"42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6\": rpc error: code = NotFound desc = could not find container \"42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6\": container with ID starting with 42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6 not found: ID does not exist" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.803952 4869 scope.go:117] "RemoveContainer" containerID="0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.804431 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92"} err="failed to get container status \"0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92\": rpc error: code = NotFound desc = could not find container \"0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92\": container with ID starting with 0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92 not found: ID does not exist" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.804507 4869 scope.go:117] "RemoveContainer" containerID="5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.804850 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338"} err="failed to get container status \"5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338\": rpc error: code = NotFound desc = could not find container \"5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338\": container with ID starting with 5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338 not found: ID does not exist" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.804874 4869 scope.go:117] "RemoveContainer" containerID="7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.805134 4869 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7"} err="failed to get container status \"7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7\": rpc error: code = NotFound desc = could not find container \"7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7\": container with ID starting with 7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7 not found: ID does not exist" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.805160 4869 scope.go:117] "RemoveContainer" containerID="42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.805431 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6"} err="failed to get container status \"42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6\": rpc error: code = NotFound desc = could not find container \"42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6\": container with ID starting with 42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6 not found: ID does not exist" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.805460 4869 scope.go:117] "RemoveContainer" containerID="0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.805737 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92"} err="failed to get container status \"0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92\": rpc error: code = NotFound desc = could not find container \"0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92\": container with ID starting with 0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92 not found: ID does not exist" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.805776 4869 scope.go:117] "RemoveContainer" containerID="5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.806112 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338"} err="failed to get container status \"5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338\": rpc error: code = NotFound desc = could not find container \"5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338\": container with ID starting with 5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338 not found: ID does not exist" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.806146 4869 scope.go:117] "RemoveContainer" containerID="7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.806455 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7"} err="failed to get container status \"7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7\": rpc error: code = NotFound desc = could not find container \"7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7\": container with ID starting with 7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7 not found: ID does not exist" Jan 
06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.806476 4869 scope.go:117] "RemoveContainer" containerID="42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.806801 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6"} err="failed to get container status \"42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6\": rpc error: code = NotFound desc = could not find container \"42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6\": container with ID starting with 42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6 not found: ID does not exist" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.806836 4869 scope.go:117] "RemoveContainer" containerID="0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.807094 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92"} err="failed to get container status \"0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92\": rpc error: code = NotFound desc = could not find container \"0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92\": container with ID starting with 0944df420f4e11528eafb0c26afee29a54952648c5cb4399c664187c9d768a92 not found: ID does not exist" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.807124 4869 scope.go:117] "RemoveContainer" containerID="5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.807433 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338"} err="failed to get container status \"5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338\": rpc error: code = NotFound desc = could not find container \"5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338\": container with ID starting with 5d2d9ea934cca5d18fe498713fc1682e22c4242069cfcb01e95af490bd769338 not found: ID does not exist" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.807461 4869 scope.go:117] "RemoveContainer" containerID="7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.809387 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7"} err="failed to get container status \"7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7\": rpc error: code = NotFound desc = could not find container \"7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7\": container with ID starting with 7ee6db3923120511bcffc7d0f8a6124a12a22636001b6701d3e7aa87c6106ba7 not found: ID does not exist" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.809410 4869 scope.go:117] "RemoveContainer" containerID="42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.809842 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6"} err="failed to get container status 
\"42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6\": rpc error: code = NotFound desc = could not find container \"42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6\": container with ID starting with 42303797b60ebf68ddd0f15ef22b6d7abe2bdfe53bb09ff026480a1d5c88c7b6 not found: ID does not exist" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.846209 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdd90245-1be0-49ff-be9b-87afcee5ec50-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.985062 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:19:39 crc kubenswrapper[4869]: I0106 14:19:39.993457 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.012064 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:19:40 crc kubenswrapper[4869]: E0106 14:19:40.012926 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd90245-1be0-49ff-be9b-87afcee5ec50" containerName="sg-core" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.012949 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd90245-1be0-49ff-be9b-87afcee5ec50" containerName="sg-core" Jan 06 14:19:40 crc kubenswrapper[4869]: E0106 14:19:40.012977 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd90245-1be0-49ff-be9b-87afcee5ec50" containerName="proxy-httpd" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.012984 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd90245-1be0-49ff-be9b-87afcee5ec50" containerName="proxy-httpd" Jan 06 14:19:40 crc kubenswrapper[4869]: E0106 14:19:40.013007 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd90245-1be0-49ff-be9b-87afcee5ec50" containerName="ceilometer-central-agent" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.013014 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd90245-1be0-49ff-be9b-87afcee5ec50" containerName="ceilometer-central-agent" Jan 06 14:19:40 crc kubenswrapper[4869]: E0106 14:19:40.013028 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd90245-1be0-49ff-be9b-87afcee5ec50" containerName="ceilometer-notification-agent" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.013040 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd90245-1be0-49ff-be9b-87afcee5ec50" containerName="ceilometer-notification-agent" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.013490 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd90245-1be0-49ff-be9b-87afcee5ec50" containerName="sg-core" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.013513 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd90245-1be0-49ff-be9b-87afcee5ec50" containerName="proxy-httpd" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.013537 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd90245-1be0-49ff-be9b-87afcee5ec50" containerName="ceilometer-central-agent" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.013550 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd90245-1be0-49ff-be9b-87afcee5ec50" containerName="ceilometer-notification-agent" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.016862 4869 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.030257 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.030298 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.050760 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.051613 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.051696 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-config-data\") pod \"ceilometer-0\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.051729 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p6dp\" (UniqueName: \"kubernetes.io/projected/d1c08c94-9abe-4942-ac68-3973ef1e62a2-kube-api-access-2p6dp\") pod \"ceilometer-0\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.051777 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-scripts\") pod \"ceilometer-0\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.051834 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.051859 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1c08c94-9abe-4942-ac68-3973ef1e62a2-run-httpd\") pod \"ceilometer-0\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.051873 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1c08c94-9abe-4942-ac68-3973ef1e62a2-log-httpd\") pod \"ceilometer-0\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.153623 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-config-data\") pod \"ceilometer-0\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 
14:19:40.153729 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p6dp\" (UniqueName: \"kubernetes.io/projected/d1c08c94-9abe-4942-ac68-3973ef1e62a2-kube-api-access-2p6dp\") pod \"ceilometer-0\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.153807 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-scripts\") pod \"ceilometer-0\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.153872 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.153902 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1c08c94-9abe-4942-ac68-3973ef1e62a2-run-httpd\") pod \"ceilometer-0\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.153921 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1c08c94-9abe-4942-ac68-3973ef1e62a2-log-httpd\") pod \"ceilometer-0\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.153948 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.154855 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1c08c94-9abe-4942-ac68-3973ef1e62a2-run-httpd\") pod \"ceilometer-0\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.154934 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1c08c94-9abe-4942-ac68-3973ef1e62a2-log-httpd\") pod \"ceilometer-0\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.157871 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.158003 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.158477 4869 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-scripts\") pod \"ceilometer-0\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.159710 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-config-data\") pod \"ceilometer-0\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.179284 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p6dp\" (UniqueName: \"kubernetes.io/projected/d1c08c94-9abe-4942-ac68-3973ef1e62a2-kube-api-access-2p6dp\") pod \"ceilometer-0\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.394853 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:19:40 crc kubenswrapper[4869]: I0106 14:19:40.868525 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:19:41 crc kubenswrapper[4869]: I0106 14:19:41.167744 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:19:41 crc kubenswrapper[4869]: I0106 14:19:41.686334 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1c08c94-9abe-4942-ac68-3973ef1e62a2","Type":"ContainerStarted","Data":"70ef82f2370c46b705eef9d991393d3cc1767c3b827dc2966a9524c3ff40961f"} Jan 06 14:19:41 crc kubenswrapper[4869]: I0106 14:19:41.727476 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdd90245-1be0-49ff-be9b-87afcee5ec50" path="/var/lib/kubelet/pods/cdd90245-1be0-49ff-be9b-87afcee5ec50/volumes" Jan 06 14:19:42 crc kubenswrapper[4869]: I0106 14:19:42.694504 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1c08c94-9abe-4942-ac68-3973ef1e62a2","Type":"ContainerStarted","Data":"251571492d283eeabd6c34b405224d15f27d4060b51f6fd0f0b1d8344e68223f"} Jan 06 14:19:42 crc kubenswrapper[4869]: I0106 14:19:42.694849 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1c08c94-9abe-4942-ac68-3973ef1e62a2","Type":"ContainerStarted","Data":"9ef9ed3462839ef4164b8cbc5b965096e079e7494b2e46ea016ba954d3466002"} Jan 06 14:19:43 crc kubenswrapper[4869]: I0106 14:19:43.715267 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1c08c94-9abe-4942-ac68-3973ef1e62a2","Type":"ContainerStarted","Data":"e2dfc664123d8e2c489f62889d95a2a994b98bdf0f263bcb486f4456b075aa53"} Jan 06 14:19:44 crc kubenswrapper[4869]: I0106 14:19:44.719604 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1c08c94-9abe-4942-ac68-3973ef1e62a2","Type":"ContainerStarted","Data":"fdc98efe056067a685b2e146633f54dce2e6a716d82813f60ba030aaca2eef9d"} Jan 06 14:19:44 crc kubenswrapper[4869]: I0106 14:19:44.720010 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1c08c94-9abe-4942-ac68-3973ef1e62a2" containerName="ceilometer-central-agent" containerID="cri-o://9ef9ed3462839ef4164b8cbc5b965096e079e7494b2e46ea016ba954d3466002" gracePeriod=30 Jan 06 14:19:44 crc 
kubenswrapper[4869]: I0106 14:19:44.720366 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 06 14:19:44 crc kubenswrapper[4869]: I0106 14:19:44.720342 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1c08c94-9abe-4942-ac68-3973ef1e62a2" containerName="sg-core" containerID="cri-o://e2dfc664123d8e2c489f62889d95a2a994b98bdf0f263bcb486f4456b075aa53" gracePeriod=30 Jan 06 14:19:44 crc kubenswrapper[4869]: I0106 14:19:44.720539 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1c08c94-9abe-4942-ac68-3973ef1e62a2" containerName="proxy-httpd" containerID="cri-o://fdc98efe056067a685b2e146633f54dce2e6a716d82813f60ba030aaca2eef9d" gracePeriod=30 Jan 06 14:19:44 crc kubenswrapper[4869]: I0106 14:19:44.720626 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1c08c94-9abe-4942-ac68-3973ef1e62a2" containerName="ceilometer-notification-agent" containerID="cri-o://251571492d283eeabd6c34b405224d15f27d4060b51f6fd0f0b1d8344e68223f" gracePeriod=30 Jan 06 14:19:44 crc kubenswrapper[4869]: I0106 14:19:44.742209 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.263641721 podStartE2EDuration="5.742187496s" podCreationTimestamp="2026-01-06 14:19:39 +0000 UTC" firstStartedPulling="2026-01-06 14:19:40.872927718 +0000 UTC m=+1199.412615382" lastFinishedPulling="2026-01-06 14:19:44.351473493 +0000 UTC m=+1202.891161157" observedRunningTime="2026-01-06 14:19:44.742030882 +0000 UTC m=+1203.281718556" watchObservedRunningTime="2026-01-06 14:19:44.742187496 +0000 UTC m=+1203.281875170" Jan 06 14:19:45 crc kubenswrapper[4869]: I0106 14:19:45.729352 4869 generic.go:334] "Generic (PLEG): container finished" podID="d1c08c94-9abe-4942-ac68-3973ef1e62a2" containerID="fdc98efe056067a685b2e146633f54dce2e6a716d82813f60ba030aaca2eef9d" exitCode=0 Jan 06 14:19:45 crc kubenswrapper[4869]: I0106 14:19:45.729692 4869 generic.go:334] "Generic (PLEG): container finished" podID="d1c08c94-9abe-4942-ac68-3973ef1e62a2" containerID="e2dfc664123d8e2c489f62889d95a2a994b98bdf0f263bcb486f4456b075aa53" exitCode=2 Jan 06 14:19:45 crc kubenswrapper[4869]: I0106 14:19:45.729701 4869 generic.go:334] "Generic (PLEG): container finished" podID="d1c08c94-9abe-4942-ac68-3973ef1e62a2" containerID="251571492d283eeabd6c34b405224d15f27d4060b51f6fd0f0b1d8344e68223f" exitCode=0 Jan 06 14:19:45 crc kubenswrapper[4869]: I0106 14:19:45.729741 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1c08c94-9abe-4942-ac68-3973ef1e62a2","Type":"ContainerDied","Data":"fdc98efe056067a685b2e146633f54dce2e6a716d82813f60ba030aaca2eef9d"} Jan 06 14:19:45 crc kubenswrapper[4869]: I0106 14:19:45.729766 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1c08c94-9abe-4942-ac68-3973ef1e62a2","Type":"ContainerDied","Data":"e2dfc664123d8e2c489f62889d95a2a994b98bdf0f263bcb486f4456b075aa53"} Jan 06 14:19:45 crc kubenswrapper[4869]: I0106 14:19:45.729777 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1c08c94-9abe-4942-ac68-3973ef1e62a2","Type":"ContainerDied","Data":"251571492d283eeabd6c34b405224d15f27d4060b51f6fd0f0b1d8344e68223f"} Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.187384 4869 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.261197 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1c08c94-9abe-4942-ac68-3973ef1e62a2-log-httpd\") pod \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.261280 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-config-data\") pod \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.261323 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-combined-ca-bundle\") pod \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.261432 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-scripts\") pod \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.261500 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1c08c94-9abe-4942-ac68-3973ef1e62a2-run-httpd\") pod \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.261575 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p6dp\" (UniqueName: \"kubernetes.io/projected/d1c08c94-9abe-4942-ac68-3973ef1e62a2-kube-api-access-2p6dp\") pod \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.261602 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-sg-core-conf-yaml\") pod \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\" (UID: \"d1c08c94-9abe-4942-ac68-3973ef1e62a2\") " Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.261989 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1c08c94-9abe-4942-ac68-3973ef1e62a2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d1c08c94-9abe-4942-ac68-3973ef1e62a2" (UID: "d1c08c94-9abe-4942-ac68-3973ef1e62a2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.262428 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1c08c94-9abe-4942-ac68-3973ef1e62a2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d1c08c94-9abe-4942-ac68-3973ef1e62a2" (UID: "d1c08c94-9abe-4942-ac68-3973ef1e62a2"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.267480 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1c08c94-9abe-4942-ac68-3973ef1e62a2-kube-api-access-2p6dp" (OuterVolumeSpecName: "kube-api-access-2p6dp") pod "d1c08c94-9abe-4942-ac68-3973ef1e62a2" (UID: "d1c08c94-9abe-4942-ac68-3973ef1e62a2"). InnerVolumeSpecName "kube-api-access-2p6dp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.267799 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-scripts" (OuterVolumeSpecName: "scripts") pod "d1c08c94-9abe-4942-ac68-3973ef1e62a2" (UID: "d1c08c94-9abe-4942-ac68-3973ef1e62a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.295328 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d1c08c94-9abe-4942-ac68-3973ef1e62a2" (UID: "d1c08c94-9abe-4942-ac68-3973ef1e62a2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.343687 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1c08c94-9abe-4942-ac68-3973ef1e62a2" (UID: "d1c08c94-9abe-4942-ac68-3973ef1e62a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.354019 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-config-data" (OuterVolumeSpecName: "config-data") pod "d1c08c94-9abe-4942-ac68-3973ef1e62a2" (UID: "d1c08c94-9abe-4942-ac68-3973ef1e62a2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.364155 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.364195 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.364207 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.364215 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1c08c94-9abe-4942-ac68-3973ef1e62a2-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.364224 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p6dp\" (UniqueName: \"kubernetes.io/projected/d1c08c94-9abe-4942-ac68-3973ef1e62a2-kube-api-access-2p6dp\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.364234 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1c08c94-9abe-4942-ac68-3973ef1e62a2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.364241 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1c08c94-9abe-4942-ac68-3973ef1e62a2-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.739571 4869 generic.go:334] "Generic (PLEG): container finished" podID="d1c08c94-9abe-4942-ac68-3973ef1e62a2" containerID="9ef9ed3462839ef4164b8cbc5b965096e079e7494b2e46ea016ba954d3466002" exitCode=0 Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.739649 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.739646 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1c08c94-9abe-4942-ac68-3973ef1e62a2","Type":"ContainerDied","Data":"9ef9ed3462839ef4164b8cbc5b965096e079e7494b2e46ea016ba954d3466002"} Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.739832 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1c08c94-9abe-4942-ac68-3973ef1e62a2","Type":"ContainerDied","Data":"70ef82f2370c46b705eef9d991393d3cc1767c3b827dc2966a9524c3ff40961f"} Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.739855 4869 scope.go:117] "RemoveContainer" containerID="fdc98efe056067a685b2e146633f54dce2e6a716d82813f60ba030aaca2eef9d" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.779357 4869 scope.go:117] "RemoveContainer" containerID="e2dfc664123d8e2c489f62889d95a2a994b98bdf0f263bcb486f4456b075aa53" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.784433 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.800852 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.813000 4869 scope.go:117] "RemoveContainer" containerID="251571492d283eeabd6c34b405224d15f27d4060b51f6fd0f0b1d8344e68223f" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.821595 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:19:46 crc kubenswrapper[4869]: E0106 14:19:46.821969 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c08c94-9abe-4942-ac68-3973ef1e62a2" containerName="sg-core" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.821981 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c08c94-9abe-4942-ac68-3973ef1e62a2" containerName="sg-core" Jan 06 14:19:46 crc kubenswrapper[4869]: E0106 14:19:46.821995 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c08c94-9abe-4942-ac68-3973ef1e62a2" containerName="ceilometer-notification-agent" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.822001 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c08c94-9abe-4942-ac68-3973ef1e62a2" containerName="ceilometer-notification-agent" Jan 06 14:19:46 crc kubenswrapper[4869]: E0106 14:19:46.822009 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c08c94-9abe-4942-ac68-3973ef1e62a2" containerName="proxy-httpd" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.822015 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c08c94-9abe-4942-ac68-3973ef1e62a2" containerName="proxy-httpd" Jan 06 14:19:46 crc kubenswrapper[4869]: E0106 14:19:46.822034 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1c08c94-9abe-4942-ac68-3973ef1e62a2" containerName="ceilometer-central-agent" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.822040 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1c08c94-9abe-4942-ac68-3973ef1e62a2" containerName="ceilometer-central-agent" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.822197 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1c08c94-9abe-4942-ac68-3973ef1e62a2" containerName="sg-core" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.822218 4869 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d1c08c94-9abe-4942-ac68-3973ef1e62a2" containerName="proxy-httpd" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.822229 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1c08c94-9abe-4942-ac68-3973ef1e62a2" containerName="ceilometer-notification-agent" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.822235 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1c08c94-9abe-4942-ac68-3973ef1e62a2" containerName="ceilometer-central-agent" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.823931 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.826553 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.828148 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.831900 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.853587 4869 scope.go:117] "RemoveContainer" containerID="9ef9ed3462839ef4164b8cbc5b965096e079e7494b2e46ea016ba954d3466002" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.875786 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.875909 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gczht\" (UniqueName: \"kubernetes.io/projected/7f526de6-6318-47b6-842b-761a6161f704-kube-api-access-gczht\") pod \"ceilometer-0\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.875972 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.876008 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f526de6-6318-47b6-842b-761a6161f704-log-httpd\") pod \"ceilometer-0\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.876041 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-config-data\") pod \"ceilometer-0\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.876161 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-scripts\") pod \"ceilometer-0\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " 
pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.876211 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f526de6-6318-47b6-842b-761a6161f704-run-httpd\") pod \"ceilometer-0\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.887494 4869 scope.go:117] "RemoveContainer" containerID="fdc98efe056067a685b2e146633f54dce2e6a716d82813f60ba030aaca2eef9d" Jan 06 14:19:46 crc kubenswrapper[4869]: E0106 14:19:46.887861 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdc98efe056067a685b2e146633f54dce2e6a716d82813f60ba030aaca2eef9d\": container with ID starting with fdc98efe056067a685b2e146633f54dce2e6a716d82813f60ba030aaca2eef9d not found: ID does not exist" containerID="fdc98efe056067a685b2e146633f54dce2e6a716d82813f60ba030aaca2eef9d" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.887890 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdc98efe056067a685b2e146633f54dce2e6a716d82813f60ba030aaca2eef9d"} err="failed to get container status \"fdc98efe056067a685b2e146633f54dce2e6a716d82813f60ba030aaca2eef9d\": rpc error: code = NotFound desc = could not find container \"fdc98efe056067a685b2e146633f54dce2e6a716d82813f60ba030aaca2eef9d\": container with ID starting with fdc98efe056067a685b2e146633f54dce2e6a716d82813f60ba030aaca2eef9d not found: ID does not exist" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.887911 4869 scope.go:117] "RemoveContainer" containerID="e2dfc664123d8e2c489f62889d95a2a994b98bdf0f263bcb486f4456b075aa53" Jan 06 14:19:46 crc kubenswrapper[4869]: E0106 14:19:46.888264 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2dfc664123d8e2c489f62889d95a2a994b98bdf0f263bcb486f4456b075aa53\": container with ID starting with e2dfc664123d8e2c489f62889d95a2a994b98bdf0f263bcb486f4456b075aa53 not found: ID does not exist" containerID="e2dfc664123d8e2c489f62889d95a2a994b98bdf0f263bcb486f4456b075aa53" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.888287 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2dfc664123d8e2c489f62889d95a2a994b98bdf0f263bcb486f4456b075aa53"} err="failed to get container status \"e2dfc664123d8e2c489f62889d95a2a994b98bdf0f263bcb486f4456b075aa53\": rpc error: code = NotFound desc = could not find container \"e2dfc664123d8e2c489f62889d95a2a994b98bdf0f263bcb486f4456b075aa53\": container with ID starting with e2dfc664123d8e2c489f62889d95a2a994b98bdf0f263bcb486f4456b075aa53 not found: ID does not exist" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.888304 4869 scope.go:117] "RemoveContainer" containerID="251571492d283eeabd6c34b405224d15f27d4060b51f6fd0f0b1d8344e68223f" Jan 06 14:19:46 crc kubenswrapper[4869]: E0106 14:19:46.888508 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"251571492d283eeabd6c34b405224d15f27d4060b51f6fd0f0b1d8344e68223f\": container with ID starting with 251571492d283eeabd6c34b405224d15f27d4060b51f6fd0f0b1d8344e68223f not found: ID does not exist" containerID="251571492d283eeabd6c34b405224d15f27d4060b51f6fd0f0b1d8344e68223f" Jan 06 14:19:46 crc kubenswrapper[4869]: 
I0106 14:19:46.888580 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"251571492d283eeabd6c34b405224d15f27d4060b51f6fd0f0b1d8344e68223f"} err="failed to get container status \"251571492d283eeabd6c34b405224d15f27d4060b51f6fd0f0b1d8344e68223f\": rpc error: code = NotFound desc = could not find container \"251571492d283eeabd6c34b405224d15f27d4060b51f6fd0f0b1d8344e68223f\": container with ID starting with 251571492d283eeabd6c34b405224d15f27d4060b51f6fd0f0b1d8344e68223f not found: ID does not exist" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.888644 4869 scope.go:117] "RemoveContainer" containerID="9ef9ed3462839ef4164b8cbc5b965096e079e7494b2e46ea016ba954d3466002" Jan 06 14:19:46 crc kubenswrapper[4869]: E0106 14:19:46.889001 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ef9ed3462839ef4164b8cbc5b965096e079e7494b2e46ea016ba954d3466002\": container with ID starting with 9ef9ed3462839ef4164b8cbc5b965096e079e7494b2e46ea016ba954d3466002 not found: ID does not exist" containerID="9ef9ed3462839ef4164b8cbc5b965096e079e7494b2e46ea016ba954d3466002" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.889033 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ef9ed3462839ef4164b8cbc5b965096e079e7494b2e46ea016ba954d3466002"} err="failed to get container status \"9ef9ed3462839ef4164b8cbc5b965096e079e7494b2e46ea016ba954d3466002\": rpc error: code = NotFound desc = could not find container \"9ef9ed3462839ef4164b8cbc5b965096e079e7494b2e46ea016ba954d3466002\": container with ID starting with 9ef9ed3462839ef4164b8cbc5b965096e079e7494b2e46ea016ba954d3466002 not found: ID does not exist" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.978007 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f526de6-6318-47b6-842b-761a6161f704-run-httpd\") pod \"ceilometer-0\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.978090 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.978131 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gczht\" (UniqueName: \"kubernetes.io/projected/7f526de6-6318-47b6-842b-761a6161f704-kube-api-access-gczht\") pod \"ceilometer-0\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.978148 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.978168 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f526de6-6318-47b6-842b-761a6161f704-log-httpd\") pod \"ceilometer-0\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " 
pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.978186 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-config-data\") pod \"ceilometer-0\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.978242 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-scripts\") pod \"ceilometer-0\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.979117 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f526de6-6318-47b6-842b-761a6161f704-log-httpd\") pod \"ceilometer-0\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.979322 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f526de6-6318-47b6-842b-761a6161f704-run-httpd\") pod \"ceilometer-0\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.983356 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.983554 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-scripts\") pod \"ceilometer-0\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.984742 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.994650 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-config-data\") pod \"ceilometer-0\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " pod="openstack/ceilometer-0" Jan 06 14:19:46 crc kubenswrapper[4869]: I0106 14:19:46.996937 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gczht\" (UniqueName: \"kubernetes.io/projected/7f526de6-6318-47b6-842b-761a6161f704-kube-api-access-gczht\") pod \"ceilometer-0\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " pod="openstack/ceilometer-0" Jan 06 14:19:47 crc kubenswrapper[4869]: I0106 14:19:47.183326 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:19:47 crc kubenswrapper[4869]: I0106 14:19:47.647574 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:19:47 crc kubenswrapper[4869]: W0106 14:19:47.652297 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f526de6_6318_47b6_842b_761a6161f704.slice/crio-f8d9082a615624c5847cf22d425739d4377cae540fa22e20addd5152300410a2 WatchSource:0}: Error finding container f8d9082a615624c5847cf22d425739d4377cae540fa22e20addd5152300410a2: Status 404 returned error can't find the container with id f8d9082a615624c5847cf22d425739d4377cae540fa22e20addd5152300410a2 Jan 06 14:19:47 crc kubenswrapper[4869]: I0106 14:19:47.714886 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1c08c94-9abe-4942-ac68-3973ef1e62a2" path="/var/lib/kubelet/pods/d1c08c94-9abe-4942-ac68-3973ef1e62a2/volumes" Jan 06 14:19:47 crc kubenswrapper[4869]: I0106 14:19:47.750122 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f526de6-6318-47b6-842b-761a6161f704","Type":"ContainerStarted","Data":"f8d9082a615624c5847cf22d425739d4377cae540fa22e20addd5152300410a2"} Jan 06 14:19:48 crc kubenswrapper[4869]: I0106 14:19:48.764463 4869 generic.go:334] "Generic (PLEG): container finished" podID="bae6e299-373f-4381-ac37-5aadba9f902f" containerID="78ac26826386d5faa21a28fe2dbe37fed31231000348d6e855e73758afeb49aa" exitCode=0 Jan 06 14:19:48 crc kubenswrapper[4869]: I0106 14:19:48.764699 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mlp2w" event={"ID":"bae6e299-373f-4381-ac37-5aadba9f902f","Type":"ContainerDied","Data":"78ac26826386d5faa21a28fe2dbe37fed31231000348d6e855e73758afeb49aa"} Jan 06 14:19:48 crc kubenswrapper[4869]: I0106 14:19:48.768618 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f526de6-6318-47b6-842b-761a6161f704","Type":"ContainerStarted","Data":"72765ec111af53e087747ece17e69354b55073c891f6acf387d2a430cd48b483"} Jan 06 14:19:49 crc kubenswrapper[4869]: I0106 14:19:49.796708 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f526de6-6318-47b6-842b-761a6161f704","Type":"ContainerStarted","Data":"00d75401413110223a5e0d08b8226540003d0fba8dc8d6f66c671056683217ea"} Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.115483 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mlp2w" Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.233488 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bae6e299-373f-4381-ac37-5aadba9f902f-config-data\") pod \"bae6e299-373f-4381-ac37-5aadba9f902f\" (UID: \"bae6e299-373f-4381-ac37-5aadba9f902f\") " Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.233601 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bae6e299-373f-4381-ac37-5aadba9f902f-combined-ca-bundle\") pod \"bae6e299-373f-4381-ac37-5aadba9f902f\" (UID: \"bae6e299-373f-4381-ac37-5aadba9f902f\") " Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.233648 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bae6e299-373f-4381-ac37-5aadba9f902f-scripts\") pod \"bae6e299-373f-4381-ac37-5aadba9f902f\" (UID: \"bae6e299-373f-4381-ac37-5aadba9f902f\") " Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.233741 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghtql\" (UniqueName: \"kubernetes.io/projected/bae6e299-373f-4381-ac37-5aadba9f902f-kube-api-access-ghtql\") pod \"bae6e299-373f-4381-ac37-5aadba9f902f\" (UID: \"bae6e299-373f-4381-ac37-5aadba9f902f\") " Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.240523 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bae6e299-373f-4381-ac37-5aadba9f902f-kube-api-access-ghtql" (OuterVolumeSpecName: "kube-api-access-ghtql") pod "bae6e299-373f-4381-ac37-5aadba9f902f" (UID: "bae6e299-373f-4381-ac37-5aadba9f902f"). InnerVolumeSpecName "kube-api-access-ghtql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.251006 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bae6e299-373f-4381-ac37-5aadba9f902f-scripts" (OuterVolumeSpecName: "scripts") pod "bae6e299-373f-4381-ac37-5aadba9f902f" (UID: "bae6e299-373f-4381-ac37-5aadba9f902f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.259197 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bae6e299-373f-4381-ac37-5aadba9f902f-config-data" (OuterVolumeSpecName: "config-data") pod "bae6e299-373f-4381-ac37-5aadba9f902f" (UID: "bae6e299-373f-4381-ac37-5aadba9f902f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.260091 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bae6e299-373f-4381-ac37-5aadba9f902f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bae6e299-373f-4381-ac37-5aadba9f902f" (UID: "bae6e299-373f-4381-ac37-5aadba9f902f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.336247 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bae6e299-373f-4381-ac37-5aadba9f902f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.336288 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bae6e299-373f-4381-ac37-5aadba9f902f-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.336303 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghtql\" (UniqueName: \"kubernetes.io/projected/bae6e299-373f-4381-ac37-5aadba9f902f-kube-api-access-ghtql\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.336318 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bae6e299-373f-4381-ac37-5aadba9f902f-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.813269 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mlp2w" event={"ID":"bae6e299-373f-4381-ac37-5aadba9f902f","Type":"ContainerDied","Data":"bd53a06df8296b2023dc3a34e004ea9c3966a467dab6874616fa8acf0540942c"} Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.813587 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd53a06df8296b2023dc3a34e004ea9c3966a467dab6874616fa8acf0540942c" Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.813654 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mlp2w" Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.821875 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f526de6-6318-47b6-842b-761a6161f704","Type":"ContainerStarted","Data":"3a27a1298b046d985057c6e688ba18149efcbed06dad89cdb79b2b86c23d2610"} Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.885975 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 06 14:19:50 crc kubenswrapper[4869]: E0106 14:19:50.886367 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae6e299-373f-4381-ac37-5aadba9f902f" containerName="nova-cell0-conductor-db-sync" Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.886387 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae6e299-373f-4381-ac37-5aadba9f902f" containerName="nova-cell0-conductor-db-sync" Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.886585 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="bae6e299-373f-4381-ac37-5aadba9f902f" containerName="nova-cell0-conductor-db-sync" Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.889321 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.892817 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.893207 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-zvfp8" Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.905516 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.944427 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpq5h\" (UniqueName: \"kubernetes.io/projected/48f633eb-c984-48c6-91ec-4b4918036e39-kube-api-access-mpq5h\") pod \"nova-cell0-conductor-0\" (UID: \"48f633eb-c984-48c6-91ec-4b4918036e39\") " pod="openstack/nova-cell0-conductor-0" Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.944595 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48f633eb-c984-48c6-91ec-4b4918036e39-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"48f633eb-c984-48c6-91ec-4b4918036e39\") " pod="openstack/nova-cell0-conductor-0" Jan 06 14:19:50 crc kubenswrapper[4869]: I0106 14:19:50.944647 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48f633eb-c984-48c6-91ec-4b4918036e39-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"48f633eb-c984-48c6-91ec-4b4918036e39\") " pod="openstack/nova-cell0-conductor-0" Jan 06 14:19:51 crc kubenswrapper[4869]: I0106 14:19:51.046818 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpq5h\" (UniqueName: \"kubernetes.io/projected/48f633eb-c984-48c6-91ec-4b4918036e39-kube-api-access-mpq5h\") pod \"nova-cell0-conductor-0\" (UID: \"48f633eb-c984-48c6-91ec-4b4918036e39\") " pod="openstack/nova-cell0-conductor-0" Jan 06 14:19:51 crc kubenswrapper[4869]: I0106 14:19:51.046900 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48f633eb-c984-48c6-91ec-4b4918036e39-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"48f633eb-c984-48c6-91ec-4b4918036e39\") " pod="openstack/nova-cell0-conductor-0" Jan 06 14:19:51 crc kubenswrapper[4869]: I0106 14:19:51.046931 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48f633eb-c984-48c6-91ec-4b4918036e39-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"48f633eb-c984-48c6-91ec-4b4918036e39\") " pod="openstack/nova-cell0-conductor-0" Jan 06 14:19:51 crc kubenswrapper[4869]: I0106 14:19:51.052441 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48f633eb-c984-48c6-91ec-4b4918036e39-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"48f633eb-c984-48c6-91ec-4b4918036e39\") " pod="openstack/nova-cell0-conductor-0" Jan 06 14:19:51 crc kubenswrapper[4869]: I0106 14:19:51.060940 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48f633eb-c984-48c6-91ec-4b4918036e39-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"48f633eb-c984-48c6-91ec-4b4918036e39\") " pod="openstack/nova-cell0-conductor-0" Jan 06 14:19:51 crc kubenswrapper[4869]: I0106 14:19:51.066554 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpq5h\" (UniqueName: \"kubernetes.io/projected/48f633eb-c984-48c6-91ec-4b4918036e39-kube-api-access-mpq5h\") pod \"nova-cell0-conductor-0\" (UID: \"48f633eb-c984-48c6-91ec-4b4918036e39\") " pod="openstack/nova-cell0-conductor-0" Jan 06 14:19:51 crc kubenswrapper[4869]: I0106 14:19:51.221844 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 06 14:19:51 crc kubenswrapper[4869]: I0106 14:19:51.674338 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 06 14:19:51 crc kubenswrapper[4869]: W0106 14:19:51.684626 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48f633eb_c984_48c6_91ec_4b4918036e39.slice/crio-9234ec509e763226830c0c1b8a5fa6b91fcdd68d41b8a497f057abb00dee0190 WatchSource:0}: Error finding container 9234ec509e763226830c0c1b8a5fa6b91fcdd68d41b8a497f057abb00dee0190: Status 404 returned error can't find the container with id 9234ec509e763226830c0c1b8a5fa6b91fcdd68d41b8a497f057abb00dee0190 Jan 06 14:19:51 crc kubenswrapper[4869]: I0106 14:19:51.849374 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f526de6-6318-47b6-842b-761a6161f704","Type":"ContainerStarted","Data":"36a75164ec476f960b7b579823f49b65db9a3d1ed21eecee2a8800dd64e507fc"} Jan 06 14:19:51 crc kubenswrapper[4869]: I0106 14:19:51.849712 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 06 14:19:51 crc kubenswrapper[4869]: I0106 14:19:51.851150 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"48f633eb-c984-48c6-91ec-4b4918036e39","Type":"ContainerStarted","Data":"9234ec509e763226830c0c1b8a5fa6b91fcdd68d41b8a497f057abb00dee0190"} Jan 06 14:19:51 crc kubenswrapper[4869]: I0106 14:19:51.875381 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.1736515929999998 podStartE2EDuration="5.875361035s" podCreationTimestamp="2026-01-06 14:19:46 +0000 UTC" firstStartedPulling="2026-01-06 14:19:47.654243888 +0000 UTC m=+1206.193931552" lastFinishedPulling="2026-01-06 14:19:51.35595331 +0000 UTC m=+1209.895640994" observedRunningTime="2026-01-06 14:19:51.867200222 +0000 UTC m=+1210.406887886" watchObservedRunningTime="2026-01-06 14:19:51.875361035 +0000 UTC m=+1210.415048689" Jan 06 14:19:52 crc kubenswrapper[4869]: I0106 14:19:52.861007 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"48f633eb-c984-48c6-91ec-4b4918036e39","Type":"ContainerStarted","Data":"c4bf202fea8375863398b9263c4ae842f3ce0590a388c56fb1982277af64df32"} Jan 06 14:19:52 crc kubenswrapper[4869]: I0106 14:19:52.861377 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 06 14:19:52 crc kubenswrapper[4869]: I0106 14:19:52.874956 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.874937121 podStartE2EDuration="2.874937121s" podCreationTimestamp="2026-01-06 14:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:19:52.873174738 +0000 UTC m=+1211.412862402" watchObservedRunningTime="2026-01-06 14:19:52.874937121 +0000 UTC m=+1211.414624785" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.248236 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.710332 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-dh8p2"] Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.711690 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dh8p2" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.713754 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.718244 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.725285 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-dh8p2"] Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.853465 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9808d05c-1692-4b1f-b1be-5060fc290609-config-data\") pod \"nova-cell0-cell-mapping-dh8p2\" (UID: \"9808d05c-1692-4b1f-b1be-5060fc290609\") " pod="openstack/nova-cell0-cell-mapping-dh8p2" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.853684 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9808d05c-1692-4b1f-b1be-5060fc290609-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dh8p2\" (UID: \"9808d05c-1692-4b1f-b1be-5060fc290609\") " pod="openstack/nova-cell0-cell-mapping-dh8p2" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.853770 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9808d05c-1692-4b1f-b1be-5060fc290609-scripts\") pod \"nova-cell0-cell-mapping-dh8p2\" (UID: \"9808d05c-1692-4b1f-b1be-5060fc290609\") " pod="openstack/nova-cell0-cell-mapping-dh8p2" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.853869 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rfsx\" (UniqueName: \"kubernetes.io/projected/9808d05c-1692-4b1f-b1be-5060fc290609-kube-api-access-5rfsx\") pod \"nova-cell0-cell-mapping-dh8p2\" (UID: \"9808d05c-1692-4b1f-b1be-5060fc290609\") " pod="openstack/nova-cell0-cell-mapping-dh8p2" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.859384 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.860846 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.863772 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.873619 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.958911 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lz8g\" (UniqueName: \"kubernetes.io/projected/45dbc745-6a90-4773-8dcf-31d57de4f384-kube-api-access-9lz8g\") pod \"nova-cell1-novncproxy-0\" (UID: \"45dbc745-6a90-4773-8dcf-31d57de4f384\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.958989 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9808d05c-1692-4b1f-b1be-5060fc290609-scripts\") pod \"nova-cell0-cell-mapping-dh8p2\" (UID: \"9808d05c-1692-4b1f-b1be-5060fc290609\") " pod="openstack/nova-cell0-cell-mapping-dh8p2" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.959064 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rfsx\" (UniqueName: \"kubernetes.io/projected/9808d05c-1692-4b1f-b1be-5060fc290609-kube-api-access-5rfsx\") pod \"nova-cell0-cell-mapping-dh8p2\" (UID: \"9808d05c-1692-4b1f-b1be-5060fc290609\") " pod="openstack/nova-cell0-cell-mapping-dh8p2" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.959116 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45dbc745-6a90-4773-8dcf-31d57de4f384-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"45dbc745-6a90-4773-8dcf-31d57de4f384\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.959179 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9808d05c-1692-4b1f-b1be-5060fc290609-config-data\") pod \"nova-cell0-cell-mapping-dh8p2\" (UID: \"9808d05c-1692-4b1f-b1be-5060fc290609\") " pod="openstack/nova-cell0-cell-mapping-dh8p2" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.959202 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45dbc745-6a90-4773-8dcf-31d57de4f384-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"45dbc745-6a90-4773-8dcf-31d57de4f384\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.959264 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9808d05c-1692-4b1f-b1be-5060fc290609-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dh8p2\" (UID: \"9808d05c-1692-4b1f-b1be-5060fc290609\") " pod="openstack/nova-cell0-cell-mapping-dh8p2" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.970752 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9808d05c-1692-4b1f-b1be-5060fc290609-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dh8p2\" (UID: \"9808d05c-1692-4b1f-b1be-5060fc290609\") " pod="openstack/nova-cell0-cell-mapping-dh8p2" Jan 
06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.973330 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9808d05c-1692-4b1f-b1be-5060fc290609-scripts\") pod \"nova-cell0-cell-mapping-dh8p2\" (UID: \"9808d05c-1692-4b1f-b1be-5060fc290609\") " pod="openstack/nova-cell0-cell-mapping-dh8p2" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.978454 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.979896 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.982752 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9808d05c-1692-4b1f-b1be-5060fc290609-config-data\") pod \"nova-cell0-cell-mapping-dh8p2\" (UID: \"9808d05c-1692-4b1f-b1be-5060fc290609\") " pod="openstack/nova-cell0-cell-mapping-dh8p2" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.992109 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 06 14:19:56 crc kubenswrapper[4869]: I0106 14:19:56.992873 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rfsx\" (UniqueName: \"kubernetes.io/projected/9808d05c-1692-4b1f-b1be-5060fc290609-kube-api-access-5rfsx\") pod \"nova-cell0-cell-mapping-dh8p2\" (UID: \"9808d05c-1692-4b1f-b1be-5060fc290609\") " pod="openstack/nova-cell0-cell-mapping-dh8p2" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.009314 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.040425 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dh8p2" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.047112 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.048586 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.052285 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.061803 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k74xb\" (UniqueName: \"kubernetes.io/projected/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-kube-api-access-k74xb\") pod \"nova-metadata-0\" (UID: \"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e\") " pod="openstack/nova-metadata-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.061859 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lz8g\" (UniqueName: \"kubernetes.io/projected/45dbc745-6a90-4773-8dcf-31d57de4f384-kube-api-access-9lz8g\") pod \"nova-cell1-novncproxy-0\" (UID: \"45dbc745-6a90-4773-8dcf-31d57de4f384\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.061885 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-config-data\") pod \"nova-metadata-0\" (UID: \"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e\") " pod="openstack/nova-metadata-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.061936 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-logs\") pod \"nova-metadata-0\" (UID: \"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e\") " pod="openstack/nova-metadata-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.061957 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f4fx\" (UniqueName: \"kubernetes.io/projected/6ede238b-a65d-42e0-af52-4462756ca58a-kube-api-access-8f4fx\") pod \"nova-api-0\" (UID: \"6ede238b-a65d-42e0-af52-4462756ca58a\") " pod="openstack/nova-api-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.061975 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45dbc745-6a90-4773-8dcf-31d57de4f384-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"45dbc745-6a90-4773-8dcf-31d57de4f384\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.061997 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ede238b-a65d-42e0-af52-4462756ca58a-config-data\") pod \"nova-api-0\" (UID: \"6ede238b-a65d-42e0-af52-4462756ca58a\") " pod="openstack/nova-api-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.062018 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ede238b-a65d-42e0-af52-4462756ca58a-logs\") pod \"nova-api-0\" (UID: \"6ede238b-a65d-42e0-af52-4462756ca58a\") " pod="openstack/nova-api-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.062044 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ede238b-a65d-42e0-af52-4462756ca58a-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"6ede238b-a65d-42e0-af52-4462756ca58a\") " pod="openstack/nova-api-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.062057 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e\") " pod="openstack/nova-metadata-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.062079 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45dbc745-6a90-4773-8dcf-31d57de4f384-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"45dbc745-6a90-4773-8dcf-31d57de4f384\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.066634 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45dbc745-6a90-4773-8dcf-31d57de4f384-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"45dbc745-6a90-4773-8dcf-31d57de4f384\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.075171 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45dbc745-6a90-4773-8dcf-31d57de4f384-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"45dbc745-6a90-4773-8dcf-31d57de4f384\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.112467 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lz8g\" (UniqueName: \"kubernetes.io/projected/45dbc745-6a90-4773-8dcf-31d57de4f384-kube-api-access-9lz8g\") pod \"nova-cell1-novncproxy-0\" (UID: \"45dbc745-6a90-4773-8dcf-31d57de4f384\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.116517 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.136124 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.137403 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.140256 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.156404 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.163572 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-logs\") pod \"nova-metadata-0\" (UID: \"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e\") " pod="openstack/nova-metadata-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.163612 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f4fx\" (UniqueName: \"kubernetes.io/projected/6ede238b-a65d-42e0-af52-4462756ca58a-kube-api-access-8f4fx\") pod \"nova-api-0\" (UID: \"6ede238b-a65d-42e0-af52-4462756ca58a\") " pod="openstack/nova-api-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.163634 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ede238b-a65d-42e0-af52-4462756ca58a-config-data\") pod \"nova-api-0\" (UID: \"6ede238b-a65d-42e0-af52-4462756ca58a\") " pod="openstack/nova-api-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.163656 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ede238b-a65d-42e0-af52-4462756ca58a-logs\") pod \"nova-api-0\" (UID: \"6ede238b-a65d-42e0-af52-4462756ca58a\") " pod="openstack/nova-api-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.163695 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ede238b-a65d-42e0-af52-4462756ca58a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6ede238b-a65d-42e0-af52-4462756ca58a\") " pod="openstack/nova-api-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.163709 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e\") " pod="openstack/nova-metadata-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.163760 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k74xb\" (UniqueName: \"kubernetes.io/projected/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-kube-api-access-k74xb\") pod \"nova-metadata-0\" (UID: \"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e\") " pod="openstack/nova-metadata-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.163790 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-config-data\") pod \"nova-metadata-0\" (UID: \"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e\") " pod="openstack/nova-metadata-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.166473 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-logs\") pod \"nova-metadata-0\" (UID: \"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e\") " 
pod="openstack/nova-metadata-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.167863 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ede238b-a65d-42e0-af52-4462756ca58a-logs\") pod \"nova-api-0\" (UID: \"6ede238b-a65d-42e0-af52-4462756ca58a\") " pod="openstack/nova-api-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.202861 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-config-data\") pod \"nova-metadata-0\" (UID: \"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e\") " pod="openstack/nova-metadata-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.204555 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e\") " pod="openstack/nova-metadata-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.206450 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f4fx\" (UniqueName: \"kubernetes.io/projected/6ede238b-a65d-42e0-af52-4462756ca58a-kube-api-access-8f4fx\") pod \"nova-api-0\" (UID: \"6ede238b-a65d-42e0-af52-4462756ca58a\") " pod="openstack/nova-api-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.207224 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ede238b-a65d-42e0-af52-4462756ca58a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6ede238b-a65d-42e0-af52-4462756ca58a\") " pod="openstack/nova-api-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.207385 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ede238b-a65d-42e0-af52-4462756ca58a-config-data\") pod \"nova-api-0\" (UID: \"6ede238b-a65d-42e0-af52-4462756ca58a\") " pod="openstack/nova-api-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.211449 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k74xb\" (UniqueName: \"kubernetes.io/projected/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-kube-api-access-k74xb\") pod \"nova-metadata-0\" (UID: \"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e\") " pod="openstack/nova-metadata-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.229826 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-2p6mk"] Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.230055 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.231847 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.254273 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-2p6mk"] Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.268846 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10757c5c-36ff-41eb-bba7-d3ad5f372da9-config-data\") pod \"nova-scheduler-0\" (UID: \"10757c5c-36ff-41eb-bba7-d3ad5f372da9\") " pod="openstack/nova-scheduler-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.268904 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10757c5c-36ff-41eb-bba7-d3ad5f372da9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"10757c5c-36ff-41eb-bba7-d3ad5f372da9\") " pod="openstack/nova-scheduler-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.269003 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjs26\" (UniqueName: \"kubernetes.io/projected/10757c5c-36ff-41eb-bba7-d3ad5f372da9-kube-api-access-tjs26\") pod \"nova-scheduler-0\" (UID: \"10757c5c-36ff-41eb-bba7-d3ad5f372da9\") " pod="openstack/nova-scheduler-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.274592 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.290589 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.370908 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dh5v\" (UniqueName: \"kubernetes.io/projected/fbd4a1a6-68dc-473b-875f-f55c1fbac887-kube-api-access-6dh5v\") pod \"dnsmasq-dns-566b5b7845-2p6mk\" (UID: \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\") " pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.370966 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-2p6mk\" (UID: \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\") " pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.371045 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-dns-svc\") pod \"dnsmasq-dns-566b5b7845-2p6mk\" (UID: \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\") " pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.371685 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10757c5c-36ff-41eb-bba7-d3ad5f372da9-config-data\") pod \"nova-scheduler-0\" (UID: \"10757c5c-36ff-41eb-bba7-d3ad5f372da9\") " pod="openstack/nova-scheduler-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.371764 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-config\") pod \"dnsmasq-dns-566b5b7845-2p6mk\" (UID: \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\") " pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.371852 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10757c5c-36ff-41eb-bba7-d3ad5f372da9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"10757c5c-36ff-41eb-bba7-d3ad5f372da9\") " pod="openstack/nova-scheduler-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.371936 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-2p6mk\" (UID: \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\") " pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.371995 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjs26\" (UniqueName: \"kubernetes.io/projected/10757c5c-36ff-41eb-bba7-d3ad5f372da9-kube-api-access-tjs26\") pod \"nova-scheduler-0\" (UID: \"10757c5c-36ff-41eb-bba7-d3ad5f372da9\") " pod="openstack/nova-scheduler-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.382462 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10757c5c-36ff-41eb-bba7-d3ad5f372da9-config-data\") pod \"nova-scheduler-0\" (UID: \"10757c5c-36ff-41eb-bba7-d3ad5f372da9\") " pod="openstack/nova-scheduler-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.382724 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10757c5c-36ff-41eb-bba7-d3ad5f372da9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"10757c5c-36ff-41eb-bba7-d3ad5f372da9\") " pod="openstack/nova-scheduler-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.400072 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjs26\" (UniqueName: \"kubernetes.io/projected/10757c5c-36ff-41eb-bba7-d3ad5f372da9-kube-api-access-tjs26\") pod \"nova-scheduler-0\" (UID: \"10757c5c-36ff-41eb-bba7-d3ad5f372da9\") " pod="openstack/nova-scheduler-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.473368 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-dns-svc\") pod \"dnsmasq-dns-566b5b7845-2p6mk\" (UID: \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\") " pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.473449 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-config\") pod \"dnsmasq-dns-566b5b7845-2p6mk\" (UID: \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\") " pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.473494 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-2p6mk\" (UID: \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\") " 
pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.473543 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dh5v\" (UniqueName: \"kubernetes.io/projected/fbd4a1a6-68dc-473b-875f-f55c1fbac887-kube-api-access-6dh5v\") pod \"dnsmasq-dns-566b5b7845-2p6mk\" (UID: \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\") " pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.473570 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-2p6mk\" (UID: \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\") " pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.474814 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-2p6mk\" (UID: \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\") " pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.475174 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-dns-svc\") pod \"dnsmasq-dns-566b5b7845-2p6mk\" (UID: \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\") " pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.475461 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-config\") pod \"dnsmasq-dns-566b5b7845-2p6mk\" (UID: \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\") " pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.475700 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-2p6mk\" (UID: \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\") " pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.494648 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dh5v\" (UniqueName: \"kubernetes.io/projected/fbd4a1a6-68dc-473b-875f-f55c1fbac887-kube-api-access-6dh5v\") pod \"dnsmasq-dns-566b5b7845-2p6mk\" (UID: \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\") " pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.610138 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.629401 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.683142 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-dh8p2"] Jan 06 14:19:57 crc kubenswrapper[4869]: W0106 14:19:57.694916 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9808d05c_1692_4b1f_b1be_5060fc290609.slice/crio-2d61176f47c5b01359739c385d6b63d9956821ab37c774a825c533713ff59ac6 WatchSource:0}: Error finding container 2d61176f47c5b01359739c385d6b63d9956821ab37c774a825c533713ff59ac6: Status 404 returned error can't find the container with id 2d61176f47c5b01359739c385d6b63d9956821ab37c774a825c533713ff59ac6 Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.813000 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.838756 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rqkfr"] Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.841735 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rqkfr" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.846963 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.847125 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.857695 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rqkfr"] Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.893701 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.896515 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm8vx\" (UniqueName: \"kubernetes.io/projected/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-kube-api-access-qm8vx\") pod \"nova-cell1-conductor-db-sync-rqkfr\" (UID: \"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f\") " pod="openstack/nova-cell1-conductor-db-sync-rqkfr" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.896607 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-config-data\") pod \"nova-cell1-conductor-db-sync-rqkfr\" (UID: \"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f\") " pod="openstack/nova-cell1-conductor-db-sync-rqkfr" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.896646 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rqkfr\" (UID: \"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f\") " pod="openstack/nova-cell1-conductor-db-sync-rqkfr" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.896730 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-scripts\") pod \"nova-cell1-conductor-db-sync-rqkfr\" (UID: 
\"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f\") " pod="openstack/nova-cell1-conductor-db-sync-rqkfr" Jan 06 14:19:57 crc kubenswrapper[4869]: W0106 14:19:57.897144 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ede238b_a65d_42e0_af52_4462756ca58a.slice/crio-20a419a6414b4cf5284e73123bb810dc77588d45919ac09627afc7020ce6c80b WatchSource:0}: Error finding container 20a419a6414b4cf5284e73123bb810dc77588d45919ac09627afc7020ce6c80b: Status 404 returned error can't find the container with id 20a419a6414b4cf5284e73123bb810dc77588d45919ac09627afc7020ce6c80b Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.901685 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.930788 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e","Type":"ContainerStarted","Data":"e9c377c10399af5588fd0d84361dc036ad6ab5b3e3e46f996222b68ee356325b"} Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.931643 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"45dbc745-6a90-4773-8dcf-31d57de4f384","Type":"ContainerStarted","Data":"bb17f43203c3760b64fe04a2b7049d9dd276b16659e413f9335563ea2a2d808a"} Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.934288 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dh8p2" event={"ID":"9808d05c-1692-4b1f-b1be-5060fc290609","Type":"ContainerStarted","Data":"2d61176f47c5b01359739c385d6b63d9956821ab37c774a825c533713ff59ac6"} Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.939609 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ede238b-a65d-42e0-af52-4462756ca58a","Type":"ContainerStarted","Data":"20a419a6414b4cf5284e73123bb810dc77588d45919ac09627afc7020ce6c80b"} Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.998715 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rqkfr\" (UID: \"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f\") " pod="openstack/nova-cell1-conductor-db-sync-rqkfr" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.998794 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-scripts\") pod \"nova-cell1-conductor-db-sync-rqkfr\" (UID: \"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f\") " pod="openstack/nova-cell1-conductor-db-sync-rqkfr" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.998842 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm8vx\" (UniqueName: \"kubernetes.io/projected/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-kube-api-access-qm8vx\") pod \"nova-cell1-conductor-db-sync-rqkfr\" (UID: \"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f\") " pod="openstack/nova-cell1-conductor-db-sync-rqkfr" Jan 06 14:19:57 crc kubenswrapper[4869]: I0106 14:19:57.998916 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-config-data\") pod \"nova-cell1-conductor-db-sync-rqkfr\" (UID: \"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f\") " 
pod="openstack/nova-cell1-conductor-db-sync-rqkfr" Jan 06 14:19:58 crc kubenswrapper[4869]: I0106 14:19:58.004191 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rqkfr\" (UID: \"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f\") " pod="openstack/nova-cell1-conductor-db-sync-rqkfr" Jan 06 14:19:58 crc kubenswrapper[4869]: I0106 14:19:58.009065 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-scripts\") pod \"nova-cell1-conductor-db-sync-rqkfr\" (UID: \"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f\") " pod="openstack/nova-cell1-conductor-db-sync-rqkfr" Jan 06 14:19:58 crc kubenswrapper[4869]: I0106 14:19:58.012579 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-config-data\") pod \"nova-cell1-conductor-db-sync-rqkfr\" (UID: \"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f\") " pod="openstack/nova-cell1-conductor-db-sync-rqkfr" Jan 06 14:19:58 crc kubenswrapper[4869]: I0106 14:19:58.021271 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm8vx\" (UniqueName: \"kubernetes.io/projected/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-kube-api-access-qm8vx\") pod \"nova-cell1-conductor-db-sync-rqkfr\" (UID: \"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f\") " pod="openstack/nova-cell1-conductor-db-sync-rqkfr" Jan 06 14:19:58 crc kubenswrapper[4869]: I0106 14:19:58.144259 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-2p6mk"] Jan 06 14:19:58 crc kubenswrapper[4869]: W0106 14:19:58.154846 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbd4a1a6_68dc_473b_875f_f55c1fbac887.slice/crio-798b433eb105e6a694887258683fec1a16f25d639e0d18e554e532d5752a5189 WatchSource:0}: Error finding container 798b433eb105e6a694887258683fec1a16f25d639e0d18e554e532d5752a5189: Status 404 returned error can't find the container with id 798b433eb105e6a694887258683fec1a16f25d639e0d18e554e532d5752a5189 Jan 06 14:19:58 crc kubenswrapper[4869]: I0106 14:19:58.169158 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rqkfr" Jan 06 14:19:58 crc kubenswrapper[4869]: I0106 14:19:58.268474 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 06 14:19:58 crc kubenswrapper[4869]: W0106 14:19:58.275396 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10757c5c_36ff_41eb_bba7_d3ad5f372da9.slice/crio-7add87320e99e3dd4c7a7769eae870afbb020437f48d2bcedd4471225f7a0fb3 WatchSource:0}: Error finding container 7add87320e99e3dd4c7a7769eae870afbb020437f48d2bcedd4471225f7a0fb3: Status 404 returned error can't find the container with id 7add87320e99e3dd4c7a7769eae870afbb020437f48d2bcedd4471225f7a0fb3 Jan 06 14:19:58 crc kubenswrapper[4869]: I0106 14:19:58.680896 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rqkfr"] Jan 06 14:19:58 crc kubenswrapper[4869]: I0106 14:19:58.949892 4869 generic.go:334] "Generic (PLEG): container finished" podID="fbd4a1a6-68dc-473b-875f-f55c1fbac887" containerID="59b5f8adf014bbe42fdc56da428808cb285d0b13f36ea41b8f58d6cb7a329766" exitCode=0 Jan 06 14:19:58 crc kubenswrapper[4869]: I0106 14:19:58.950150 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" event={"ID":"fbd4a1a6-68dc-473b-875f-f55c1fbac887","Type":"ContainerDied","Data":"59b5f8adf014bbe42fdc56da428808cb285d0b13f36ea41b8f58d6cb7a329766"} Jan 06 14:19:58 crc kubenswrapper[4869]: I0106 14:19:58.950176 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" event={"ID":"fbd4a1a6-68dc-473b-875f-f55c1fbac887","Type":"ContainerStarted","Data":"798b433eb105e6a694887258683fec1a16f25d639e0d18e554e532d5752a5189"} Jan 06 14:19:58 crc kubenswrapper[4869]: I0106 14:19:58.954911 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rqkfr" event={"ID":"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f","Type":"ContainerStarted","Data":"6612a01c1789b89004e1731656d73c966545dd278df7d0351f8fc39fd576fc62"} Jan 06 14:19:58 crc kubenswrapper[4869]: I0106 14:19:58.954977 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rqkfr" event={"ID":"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f","Type":"ContainerStarted","Data":"1fd9f0889012c2165e610a81b6a4f2a5da23553c96622124efa202c3308c2ae1"} Jan 06 14:19:58 crc kubenswrapper[4869]: I0106 14:19:58.958212 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"10757c5c-36ff-41eb-bba7-d3ad5f372da9","Type":"ContainerStarted","Data":"7add87320e99e3dd4c7a7769eae870afbb020437f48d2bcedd4471225f7a0fb3"} Jan 06 14:19:58 crc kubenswrapper[4869]: I0106 14:19:58.961219 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dh8p2" event={"ID":"9808d05c-1692-4b1f-b1be-5060fc290609","Type":"ContainerStarted","Data":"5f2cb8b410e69514df37a550dddbb195cb3b96f1de61343c8dbc5ac28bc18d8a"} Jan 06 14:19:58 crc kubenswrapper[4869]: I0106 14:19:58.993264 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-dh8p2" podStartSLOduration=2.993245733 podStartE2EDuration="2.993245733s" podCreationTimestamp="2026-01-06 14:19:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:19:58.989405338 +0000 UTC 
m=+1217.529093002" watchObservedRunningTime="2026-01-06 14:19:58.993245733 +0000 UTC m=+1217.532933397" Jan 06 14:19:59 crc kubenswrapper[4869]: I0106 14:19:59.021218 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-rqkfr" podStartSLOduration=2.021191767 podStartE2EDuration="2.021191767s" podCreationTimestamp="2026-01-06 14:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:19:59.003253832 +0000 UTC m=+1217.542941496" watchObservedRunningTime="2026-01-06 14:19:59.021191767 +0000 UTC m=+1217.560879431" Jan 06 14:19:59 crc kubenswrapper[4869]: I0106 14:19:59.970837 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" event={"ID":"fbd4a1a6-68dc-473b-875f-f55c1fbac887","Type":"ContainerStarted","Data":"afff593e83ab978b1523ae56b8844ade234551daa06e1595246b702ad7da5b83"} Jan 06 14:19:59 crc kubenswrapper[4869]: I0106 14:19:59.971328 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" Jan 06 14:19:59 crc kubenswrapper[4869]: I0106 14:19:59.990912 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" podStartSLOduration=2.990891423 podStartE2EDuration="2.990891423s" podCreationTimestamp="2026-01-06 14:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:19:59.989399246 +0000 UTC m=+1218.529086910" watchObservedRunningTime="2026-01-06 14:19:59.990891423 +0000 UTC m=+1218.530579077" Jan 06 14:20:00 crc kubenswrapper[4869]: I0106 14:20:00.423520 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 06 14:20:00 crc kubenswrapper[4869]: I0106 14:20:00.432490 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.008001 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"10757c5c-36ff-41eb-bba7-d3ad5f372da9","Type":"ContainerStarted","Data":"1808fa89d2dc5f3f46815bc81f9f1cd07ff7f711c1510977ffe176f830be52f4"} Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.013495 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e","Type":"ContainerStarted","Data":"cf0bd8af70791bf58373dbb17d654a4db640ae54dd8ee579a21490a7f15ae20b"} Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.013563 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e","Type":"ContainerStarted","Data":"0db02d8ba07eb03e8ca21e4ee327e0067cfa7e9df08c04e19cebe04c04157f64"} Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.014051 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e" containerName="nova-metadata-log" containerID="cri-o://0db02d8ba07eb03e8ca21e4ee327e0067cfa7e9df08c04e19cebe04c04157f64" gracePeriod=30 Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.014103 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e" 
containerName="nova-metadata-metadata" containerID="cri-o://cf0bd8af70791bf58373dbb17d654a4db640ae54dd8ee579a21490a7f15ae20b" gracePeriod=30 Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.017991 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"45dbc745-6a90-4773-8dcf-31d57de4f384","Type":"ContainerStarted","Data":"6d75bd8622f014916acb8f34fa72bba5a875363533ba5f5e4882350b4ea586c2"} Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.018126 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="45dbc745-6a90-4773-8dcf-31d57de4f384" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://6d75bd8622f014916acb8f34fa72bba5a875363533ba5f5e4882350b4ea586c2" gracePeriod=30 Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.033973 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ede238b-a65d-42e0-af52-4462756ca58a","Type":"ContainerStarted","Data":"fbe21376fbb30002898422b545304b8279733bb9bd2d188fbe6d9453a8dc8bb2"} Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.034014 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ede238b-a65d-42e0-af52-4462756ca58a","Type":"ContainerStarted","Data":"1328b103741f8aa046f192bbeb9defccb1fb83183ab89fa218846e53e49abfea"} Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.037183 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.474049981 podStartE2EDuration="6.037160194s" podCreationTimestamp="2026-01-06 14:19:57 +0000 UTC" firstStartedPulling="2026-01-06 14:19:58.279474786 +0000 UTC m=+1216.819162450" lastFinishedPulling="2026-01-06 14:20:01.842584989 +0000 UTC m=+1220.382272663" observedRunningTime="2026-01-06 14:20:03.029386912 +0000 UTC m=+1221.569074586" watchObservedRunningTime="2026-01-06 14:20:03.037160194 +0000 UTC m=+1221.576847858" Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.046904 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.047413976 podStartE2EDuration="7.046883705s" podCreationTimestamp="2026-01-06 14:19:56 +0000 UTC" firstStartedPulling="2026-01-06 14:19:57.84268375 +0000 UTC m=+1216.382371424" lastFinishedPulling="2026-01-06 14:20:01.842153489 +0000 UTC m=+1220.381841153" observedRunningTime="2026-01-06 14:20:03.045613064 +0000 UTC m=+1221.585300738" watchObservedRunningTime="2026-01-06 14:20:03.046883705 +0000 UTC m=+1221.586571369" Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.076612 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.121880195 podStartE2EDuration="7.076585773s" podCreationTimestamp="2026-01-06 14:19:56 +0000 UTC" firstStartedPulling="2026-01-06 14:19:57.891652215 +0000 UTC m=+1216.431339879" lastFinishedPulling="2026-01-06 14:20:01.846357793 +0000 UTC m=+1220.386045457" observedRunningTime="2026-01-06 14:20:03.067699012 +0000 UTC m=+1221.607386676" watchObservedRunningTime="2026-01-06 14:20:03.076585773 +0000 UTC m=+1221.616273467" Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.093879 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.151677194 podStartE2EDuration="7.093856401s" podCreationTimestamp="2026-01-06 14:19:56 +0000 UTC" 
firstStartedPulling="2026-01-06 14:19:57.900028513 +0000 UTC m=+1216.439716177" lastFinishedPulling="2026-01-06 14:20:01.84220772 +0000 UTC m=+1220.381895384" observedRunningTime="2026-01-06 14:20:03.087845052 +0000 UTC m=+1221.627532726" watchObservedRunningTime="2026-01-06 14:20:03.093856401 +0000 UTC m=+1221.633544065" Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.600189 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.707561 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-logs\") pod \"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e\" (UID: \"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e\") " Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.707796 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-combined-ca-bundle\") pod \"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e\" (UID: \"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e\") " Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.707855 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-config-data\") pod \"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e\" (UID: \"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e\") " Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.707979 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k74xb\" (UniqueName: \"kubernetes.io/projected/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-kube-api-access-k74xb\") pod \"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e\" (UID: \"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e\") " Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.708102 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-logs" (OuterVolumeSpecName: "logs") pod "3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e" (UID: "3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.708424 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-logs\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.714043 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-kube-api-access-k74xb" (OuterVolumeSpecName: "kube-api-access-k74xb") pod "3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e" (UID: "3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e"). InnerVolumeSpecName "kube-api-access-k74xb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.740536 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e" (UID: "3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.750371 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-config-data" (OuterVolumeSpecName: "config-data") pod "3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e" (UID: "3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.810357 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.811422 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:03 crc kubenswrapper[4869]: I0106 14:20:03.811454 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k74xb\" (UniqueName: \"kubernetes.io/projected/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e-kube-api-access-k74xb\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.041906 4869 generic.go:334] "Generic (PLEG): container finished" podID="3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e" containerID="cf0bd8af70791bf58373dbb17d654a4db640ae54dd8ee579a21490a7f15ae20b" exitCode=0 Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.041945 4869 generic.go:334] "Generic (PLEG): container finished" podID="3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e" containerID="0db02d8ba07eb03e8ca21e4ee327e0067cfa7e9df08c04e19cebe04c04157f64" exitCode=143 Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.041955 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.042023 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e","Type":"ContainerDied","Data":"cf0bd8af70791bf58373dbb17d654a4db640ae54dd8ee579a21490a7f15ae20b"} Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.042050 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e","Type":"ContainerDied","Data":"0db02d8ba07eb03e8ca21e4ee327e0067cfa7e9df08c04e19cebe04c04157f64"} Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.042062 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e","Type":"ContainerDied","Data":"e9c377c10399af5588fd0d84361dc036ad6ab5b3e3e46f996222b68ee356325b"} Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.042077 4869 scope.go:117] "RemoveContainer" containerID="cf0bd8af70791bf58373dbb17d654a4db640ae54dd8ee579a21490a7f15ae20b" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.073175 4869 scope.go:117] "RemoveContainer" containerID="0db02d8ba07eb03e8ca21e4ee327e0067cfa7e9df08c04e19cebe04c04157f64" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.077796 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.087515 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.092269 4869 scope.go:117] "RemoveContainer" containerID="cf0bd8af70791bf58373dbb17d654a4db640ae54dd8ee579a21490a7f15ae20b" Jan 06 14:20:04 crc kubenswrapper[4869]: E0106 14:20:04.098513 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf0bd8af70791bf58373dbb17d654a4db640ae54dd8ee579a21490a7f15ae20b\": container with ID starting with cf0bd8af70791bf58373dbb17d654a4db640ae54dd8ee579a21490a7f15ae20b not found: ID does not exist" containerID="cf0bd8af70791bf58373dbb17d654a4db640ae54dd8ee579a21490a7f15ae20b" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.098575 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf0bd8af70791bf58373dbb17d654a4db640ae54dd8ee579a21490a7f15ae20b"} err="failed to get container status \"cf0bd8af70791bf58373dbb17d654a4db640ae54dd8ee579a21490a7f15ae20b\": rpc error: code = NotFound desc = could not find container \"cf0bd8af70791bf58373dbb17d654a4db640ae54dd8ee579a21490a7f15ae20b\": container with ID starting with cf0bd8af70791bf58373dbb17d654a4db640ae54dd8ee579a21490a7f15ae20b not found: ID does not exist" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.098609 4869 scope.go:117] "RemoveContainer" containerID="0db02d8ba07eb03e8ca21e4ee327e0067cfa7e9df08c04e19cebe04c04157f64" Jan 06 14:20:04 crc kubenswrapper[4869]: E0106 14:20:04.099145 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0db02d8ba07eb03e8ca21e4ee327e0067cfa7e9df08c04e19cebe04c04157f64\": container with ID starting with 0db02d8ba07eb03e8ca21e4ee327e0067cfa7e9df08c04e19cebe04c04157f64 not found: ID does not exist" containerID="0db02d8ba07eb03e8ca21e4ee327e0067cfa7e9df08c04e19cebe04c04157f64" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 
14:20:04.099194 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0db02d8ba07eb03e8ca21e4ee327e0067cfa7e9df08c04e19cebe04c04157f64"} err="failed to get container status \"0db02d8ba07eb03e8ca21e4ee327e0067cfa7e9df08c04e19cebe04c04157f64\": rpc error: code = NotFound desc = could not find container \"0db02d8ba07eb03e8ca21e4ee327e0067cfa7e9df08c04e19cebe04c04157f64\": container with ID starting with 0db02d8ba07eb03e8ca21e4ee327e0067cfa7e9df08c04e19cebe04c04157f64 not found: ID does not exist" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.099225 4869 scope.go:117] "RemoveContainer" containerID="cf0bd8af70791bf58373dbb17d654a4db640ae54dd8ee579a21490a7f15ae20b" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.099932 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf0bd8af70791bf58373dbb17d654a4db640ae54dd8ee579a21490a7f15ae20b"} err="failed to get container status \"cf0bd8af70791bf58373dbb17d654a4db640ae54dd8ee579a21490a7f15ae20b\": rpc error: code = NotFound desc = could not find container \"cf0bd8af70791bf58373dbb17d654a4db640ae54dd8ee579a21490a7f15ae20b\": container with ID starting with cf0bd8af70791bf58373dbb17d654a4db640ae54dd8ee579a21490a7f15ae20b not found: ID does not exist" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.099964 4869 scope.go:117] "RemoveContainer" containerID="0db02d8ba07eb03e8ca21e4ee327e0067cfa7e9df08c04e19cebe04c04157f64" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.100319 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0db02d8ba07eb03e8ca21e4ee327e0067cfa7e9df08c04e19cebe04c04157f64"} err="failed to get container status \"0db02d8ba07eb03e8ca21e4ee327e0067cfa7e9df08c04e19cebe04c04157f64\": rpc error: code = NotFound desc = could not find container \"0db02d8ba07eb03e8ca21e4ee327e0067cfa7e9df08c04e19cebe04c04157f64\": container with ID starting with 0db02d8ba07eb03e8ca21e4ee327e0067cfa7e9df08c04e19cebe04c04157f64 not found: ID does not exist" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.105541 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 06 14:20:04 crc kubenswrapper[4869]: E0106 14:20:04.106511 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e" containerName="nova-metadata-log" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.106589 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e" containerName="nova-metadata-log" Jan 06 14:20:04 crc kubenswrapper[4869]: E0106 14:20:04.106680 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e" containerName="nova-metadata-metadata" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.106733 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e" containerName="nova-metadata-metadata" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.106975 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e" containerName="nova-metadata-metadata" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.107052 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e" containerName="nova-metadata-log" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.108167 4869 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.110980 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.117162 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.118724 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.219517 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e600160-27c9-4d34-a068-60a8d85ba08a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0e600160-27c9-4d34-a068-60a8d85ba08a\") " pod="openstack/nova-metadata-0" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.219591 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e600160-27c9-4d34-a068-60a8d85ba08a-config-data\") pod \"nova-metadata-0\" (UID: \"0e600160-27c9-4d34-a068-60a8d85ba08a\") " pod="openstack/nova-metadata-0" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.219809 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncqzb\" (UniqueName: \"kubernetes.io/projected/0e600160-27c9-4d34-a068-60a8d85ba08a-kube-api-access-ncqzb\") pod \"nova-metadata-0\" (UID: \"0e600160-27c9-4d34-a068-60a8d85ba08a\") " pod="openstack/nova-metadata-0" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.219888 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e600160-27c9-4d34-a068-60a8d85ba08a-logs\") pod \"nova-metadata-0\" (UID: \"0e600160-27c9-4d34-a068-60a8d85ba08a\") " pod="openstack/nova-metadata-0" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.220246 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e600160-27c9-4d34-a068-60a8d85ba08a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0e600160-27c9-4d34-a068-60a8d85ba08a\") " pod="openstack/nova-metadata-0" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.321605 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e600160-27c9-4d34-a068-60a8d85ba08a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0e600160-27c9-4d34-a068-60a8d85ba08a\") " pod="openstack/nova-metadata-0" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.321698 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e600160-27c9-4d34-a068-60a8d85ba08a-config-data\") pod \"nova-metadata-0\" (UID: \"0e600160-27c9-4d34-a068-60a8d85ba08a\") " pod="openstack/nova-metadata-0" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.321738 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncqzb\" (UniqueName: \"kubernetes.io/projected/0e600160-27c9-4d34-a068-60a8d85ba08a-kube-api-access-ncqzb\") pod \"nova-metadata-0\" (UID: 
\"0e600160-27c9-4d34-a068-60a8d85ba08a\") " pod="openstack/nova-metadata-0" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.321760 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e600160-27c9-4d34-a068-60a8d85ba08a-logs\") pod \"nova-metadata-0\" (UID: \"0e600160-27c9-4d34-a068-60a8d85ba08a\") " pod="openstack/nova-metadata-0" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.321812 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e600160-27c9-4d34-a068-60a8d85ba08a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0e600160-27c9-4d34-a068-60a8d85ba08a\") " pod="openstack/nova-metadata-0" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.322820 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e600160-27c9-4d34-a068-60a8d85ba08a-logs\") pod \"nova-metadata-0\" (UID: \"0e600160-27c9-4d34-a068-60a8d85ba08a\") " pod="openstack/nova-metadata-0" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.327192 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e600160-27c9-4d34-a068-60a8d85ba08a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0e600160-27c9-4d34-a068-60a8d85ba08a\") " pod="openstack/nova-metadata-0" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.327642 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e600160-27c9-4d34-a068-60a8d85ba08a-config-data\") pod \"nova-metadata-0\" (UID: \"0e600160-27c9-4d34-a068-60a8d85ba08a\") " pod="openstack/nova-metadata-0" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.333978 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e600160-27c9-4d34-a068-60a8d85ba08a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0e600160-27c9-4d34-a068-60a8d85ba08a\") " pod="openstack/nova-metadata-0" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.342824 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncqzb\" (UniqueName: \"kubernetes.io/projected/0e600160-27c9-4d34-a068-60a8d85ba08a-kube-api-access-ncqzb\") pod \"nova-metadata-0\" (UID: \"0e600160-27c9-4d34-a068-60a8d85ba08a\") " pod="openstack/nova-metadata-0" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.430228 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 06 14:20:04 crc kubenswrapper[4869]: I0106 14:20:04.894208 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 06 14:20:04 crc kubenswrapper[4869]: W0106 14:20:04.911558 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e600160_27c9_4d34_a068_60a8d85ba08a.slice/crio-a06a88c575387153bb64bce8c1aedbcf4ae72adee5a6c9efc7b8445ef1aa007b WatchSource:0}: Error finding container a06a88c575387153bb64bce8c1aedbcf4ae72adee5a6c9efc7b8445ef1aa007b: Status 404 returned error can't find the container with id a06a88c575387153bb64bce8c1aedbcf4ae72adee5a6c9efc7b8445ef1aa007b Jan 06 14:20:05 crc kubenswrapper[4869]: I0106 14:20:05.052912 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e600160-27c9-4d34-a068-60a8d85ba08a","Type":"ContainerStarted","Data":"a06a88c575387153bb64bce8c1aedbcf4ae72adee5a6c9efc7b8445ef1aa007b"} Jan 06 14:20:05 crc kubenswrapper[4869]: I0106 14:20:05.718409 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e" path="/var/lib/kubelet/pods/3c4ef1cc-dcfe-4c9e-b3ee-12bfb8f4625e/volumes" Jan 06 14:20:06 crc kubenswrapper[4869]: I0106 14:20:06.065050 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e600160-27c9-4d34-a068-60a8d85ba08a","Type":"ContainerStarted","Data":"bb115ccef4c5ed8be3800b6d1e69f9641ba4947122699ffcf50f022a33db4d51"} Jan 06 14:20:06 crc kubenswrapper[4869]: I0106 14:20:06.065438 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e600160-27c9-4d34-a068-60a8d85ba08a","Type":"ContainerStarted","Data":"cb6c3f38b2b0d8bac1dbcd1c77d14745e44fb7740efd0eba42a7635672ef0772"} Jan 06 14:20:06 crc kubenswrapper[4869]: I0106 14:20:06.067477 4869 generic.go:334] "Generic (PLEG): container finished" podID="9808d05c-1692-4b1f-b1be-5060fc290609" containerID="5f2cb8b410e69514df37a550dddbb195cb3b96f1de61343c8dbc5ac28bc18d8a" exitCode=0 Jan 06 14:20:06 crc kubenswrapper[4869]: I0106 14:20:06.067531 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dh8p2" event={"ID":"9808d05c-1692-4b1f-b1be-5060fc290609","Type":"ContainerDied","Data":"5f2cb8b410e69514df37a550dddbb195cb3b96f1de61343c8dbc5ac28bc18d8a"} Jan 06 14:20:06 crc kubenswrapper[4869]: I0106 14:20:06.088538 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.088517862 podStartE2EDuration="2.088517862s" podCreationTimestamp="2026-01-06 14:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:20:06.083683002 +0000 UTC m=+1224.623370666" watchObservedRunningTime="2026-01-06 14:20:06.088517862 +0000 UTC m=+1224.628205526" Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.078137 4869 generic.go:334] "Generic (PLEG): container finished" podID="e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f" containerID="6612a01c1789b89004e1731656d73c966545dd278df7d0351f8fc39fd576fc62" exitCode=0 Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.078360 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rqkfr" 
event={"ID":"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f","Type":"ContainerDied","Data":"6612a01c1789b89004e1731656d73c966545dd278df7d0351f8fc39fd576fc62"} Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.231125 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.275579 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.275654 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.479310 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dh8p2" Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.588431 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9808d05c-1692-4b1f-b1be-5060fc290609-combined-ca-bundle\") pod \"9808d05c-1692-4b1f-b1be-5060fc290609\" (UID: \"9808d05c-1692-4b1f-b1be-5060fc290609\") " Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.588483 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9808d05c-1692-4b1f-b1be-5060fc290609-scripts\") pod \"9808d05c-1692-4b1f-b1be-5060fc290609\" (UID: \"9808d05c-1692-4b1f-b1be-5060fc290609\") " Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.588593 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9808d05c-1692-4b1f-b1be-5060fc290609-config-data\") pod \"9808d05c-1692-4b1f-b1be-5060fc290609\" (UID: \"9808d05c-1692-4b1f-b1be-5060fc290609\") " Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.589254 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rfsx\" (UniqueName: \"kubernetes.io/projected/9808d05c-1692-4b1f-b1be-5060fc290609-kube-api-access-5rfsx\") pod \"9808d05c-1692-4b1f-b1be-5060fc290609\" (UID: \"9808d05c-1692-4b1f-b1be-5060fc290609\") " Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.595067 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9808d05c-1692-4b1f-b1be-5060fc290609-kube-api-access-5rfsx" (OuterVolumeSpecName: "kube-api-access-5rfsx") pod "9808d05c-1692-4b1f-b1be-5060fc290609" (UID: "9808d05c-1692-4b1f-b1be-5060fc290609"). InnerVolumeSpecName "kube-api-access-5rfsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.597247 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9808d05c-1692-4b1f-b1be-5060fc290609-scripts" (OuterVolumeSpecName: "scripts") pod "9808d05c-1692-4b1f-b1be-5060fc290609" (UID: "9808d05c-1692-4b1f-b1be-5060fc290609"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.611218 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.611452 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.624352 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9808d05c-1692-4b1f-b1be-5060fc290609-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9808d05c-1692-4b1f-b1be-5060fc290609" (UID: "9808d05c-1692-4b1f-b1be-5060fc290609"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.624827 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9808d05c-1692-4b1f-b1be-5060fc290609-config-data" (OuterVolumeSpecName: "config-data") pod "9808d05c-1692-4b1f-b1be-5060fc290609" (UID: "9808d05c-1692-4b1f-b1be-5060fc290609"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.630818 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.642738 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.691293 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9808d05c-1692-4b1f-b1be-5060fc290609-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.691326 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9808d05c-1692-4b1f-b1be-5060fc290609-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.691337 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9808d05c-1692-4b1f-b1be-5060fc290609-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.691345 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rfsx\" (UniqueName: \"kubernetes.io/projected/9808d05c-1692-4b1f-b1be-5060fc290609-kube-api-access-5rfsx\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.715367 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-frkrl"] Jan 06 14:20:07 crc kubenswrapper[4869]: I0106 14:20:07.715659 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" podUID="798c903a-0423-4e97-a986-9b705bb64ad9" containerName="dnsmasq-dns" containerID="cri-o://6117bc10fd17814766b0f4e171951d82d40b0ededb7497190e05cdf7f5c30e2e" gracePeriod=10 Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.108249 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dh8p2" event={"ID":"9808d05c-1692-4b1f-b1be-5060fc290609","Type":"ContainerDied","Data":"2d61176f47c5b01359739c385d6b63d9956821ab37c774a825c533713ff59ac6"} Jan 06 14:20:08 crc 
kubenswrapper[4869]: I0106 14:20:08.108588 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d61176f47c5b01359739c385d6b63d9956821ab37c774a825c533713ff59ac6" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.108764 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dh8p2" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.120579 4869 generic.go:334] "Generic (PLEG): container finished" podID="798c903a-0423-4e97-a986-9b705bb64ad9" containerID="6117bc10fd17814766b0f4e171951d82d40b0ededb7497190e05cdf7f5c30e2e" exitCode=0 Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.121681 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" event={"ID":"798c903a-0423-4e97-a986-9b705bb64ad9","Type":"ContainerDied","Data":"6117bc10fd17814766b0f4e171951d82d40b0ededb7497190e05cdf7f5c30e2e"} Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.151786 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.160150 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.303350 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.303603 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6ede238b-a65d-42e0-af52-4462756ca58a" containerName="nova-api-log" containerID="cri-o://1328b103741f8aa046f192bbeb9defccb1fb83183ab89fa218846e53e49abfea" gracePeriod=30 Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.303989 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-dns-svc\") pod \"798c903a-0423-4e97-a986-9b705bb64ad9\" (UID: \"798c903a-0423-4e97-a986-9b705bb64ad9\") " Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.304027 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6ede238b-a65d-42e0-af52-4462756ca58a" containerName="nova-api-api" containerID="cri-o://fbe21376fbb30002898422b545304b8279733bb9bd2d188fbe6d9453a8dc8bb2" gracePeriod=30 Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.304051 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-config\") pod \"798c903a-0423-4e97-a986-9b705bb64ad9\" (UID: \"798c903a-0423-4e97-a986-9b705bb64ad9\") " Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.304398 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-ovsdbserver-nb\") pod \"798c903a-0423-4e97-a986-9b705bb64ad9\" (UID: \"798c903a-0423-4e97-a986-9b705bb64ad9\") " Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.304472 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6t28\" (UniqueName: \"kubernetes.io/projected/798c903a-0423-4e97-a986-9b705bb64ad9-kube-api-access-j6t28\") pod \"798c903a-0423-4e97-a986-9b705bb64ad9\" (UID: \"798c903a-0423-4e97-a986-9b705bb64ad9\") " Jan 06 
14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.304548 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-ovsdbserver-sb\") pod \"798c903a-0423-4e97-a986-9b705bb64ad9\" (UID: \"798c903a-0423-4e97-a986-9b705bb64ad9\") " Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.316797 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6ede238b-a65d-42e0-af52-4462756ca58a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.171:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.316909 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6ede238b-a65d-42e0-af52-4462756ca58a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.171:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.323626 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/798c903a-0423-4e97-a986-9b705bb64ad9-kube-api-access-j6t28" (OuterVolumeSpecName: "kube-api-access-j6t28") pod "798c903a-0423-4e97-a986-9b705bb64ad9" (UID: "798c903a-0423-4e97-a986-9b705bb64ad9"). InnerVolumeSpecName "kube-api-access-j6t28". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.343918 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.344405 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0e600160-27c9-4d34-a068-60a8d85ba08a" containerName="nova-metadata-log" containerID="cri-o://cb6c3f38b2b0d8bac1dbcd1c77d14745e44fb7740efd0eba42a7635672ef0772" gracePeriod=30 Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.345461 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0e600160-27c9-4d34-a068-60a8d85ba08a" containerName="nova-metadata-metadata" containerID="cri-o://bb115ccef4c5ed8be3800b6d1e69f9641ba4947122699ffcf50f022a33db4d51" gracePeriod=30 Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.406958 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6t28\" (UniqueName: \"kubernetes.io/projected/798c903a-0423-4e97-a986-9b705bb64ad9-kube-api-access-j6t28\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.414961 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-config" (OuterVolumeSpecName: "config") pod "798c903a-0423-4e97-a986-9b705bb64ad9" (UID: "798c903a-0423-4e97-a986-9b705bb64ad9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.426230 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "798c903a-0423-4e97-a986-9b705bb64ad9" (UID: "798c903a-0423-4e97-a986-9b705bb64ad9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.460164 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "798c903a-0423-4e97-a986-9b705bb64ad9" (UID: "798c903a-0423-4e97-a986-9b705bb64ad9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.478203 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "798c903a-0423-4e97-a986-9b705bb64ad9" (UID: "798c903a-0423-4e97-a986-9b705bb64ad9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.508412 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.508451 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.508461 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.508471 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/798c903a-0423-4e97-a986-9b705bb64ad9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.707177 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.710589 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rqkfr" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.812150 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-combined-ca-bundle\") pod \"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f\" (UID: \"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f\") " Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.812358 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-config-data\") pod \"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f\" (UID: \"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f\") " Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.812391 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-scripts\") pod \"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f\" (UID: \"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f\") " Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.812413 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm8vx\" (UniqueName: \"kubernetes.io/projected/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-kube-api-access-qm8vx\") pod \"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f\" (UID: \"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f\") " Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.817106 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-scripts" (OuterVolumeSpecName: "scripts") pod "e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f" (UID: "e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.820931 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-kube-api-access-qm8vx" (OuterVolumeSpecName: "kube-api-access-qm8vx") pod "e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f" (UID: "e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f"). InnerVolumeSpecName "kube-api-access-qm8vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.837316 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-config-data" (OuterVolumeSpecName: "config-data") pod "e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f" (UID: "e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.844060 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f" (UID: "e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.914882 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.914920 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.914929 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:08 crc kubenswrapper[4869]: I0106 14:20:08.914937 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm8vx\" (UniqueName: \"kubernetes.io/projected/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f-kube-api-access-qm8vx\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.144995 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" event={"ID":"798c903a-0423-4e97-a986-9b705bb64ad9","Type":"ContainerDied","Data":"727a1a07656f76bd5c48473425171d691a2cdf757179c1295cee38e8520f797d"} Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.145047 4869 scope.go:117] "RemoveContainer" containerID="6117bc10fd17814766b0f4e171951d82d40b0ededb7497190e05cdf7f5c30e2e" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.145141 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-frkrl" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.147797 4869 generic.go:334] "Generic (PLEG): container finished" podID="0e600160-27c9-4d34-a068-60a8d85ba08a" containerID="bb115ccef4c5ed8be3800b6d1e69f9641ba4947122699ffcf50f022a33db4d51" exitCode=0 Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.147831 4869 generic.go:334] "Generic (PLEG): container finished" podID="0e600160-27c9-4d34-a068-60a8d85ba08a" containerID="cb6c3f38b2b0d8bac1dbcd1c77d14745e44fb7740efd0eba42a7635672ef0772" exitCode=143 Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.148204 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e600160-27c9-4d34-a068-60a8d85ba08a","Type":"ContainerDied","Data":"bb115ccef4c5ed8be3800b6d1e69f9641ba4947122699ffcf50f022a33db4d51"} Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.148243 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e600160-27c9-4d34-a068-60a8d85ba08a","Type":"ContainerDied","Data":"cb6c3f38b2b0d8bac1dbcd1c77d14745e44fb7740efd0eba42a7635672ef0772"} Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.156880 4869 generic.go:334] "Generic (PLEG): container finished" podID="6ede238b-a65d-42e0-af52-4462756ca58a" containerID="1328b103741f8aa046f192bbeb9defccb1fb83183ab89fa218846e53e49abfea" exitCode=143 Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.156953 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ede238b-a65d-42e0-af52-4462756ca58a","Type":"ContainerDied","Data":"1328b103741f8aa046f192bbeb9defccb1fb83183ab89fa218846e53e49abfea"} Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.165908 4869 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rqkfr" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.171226 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rqkfr" event={"ID":"e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f","Type":"ContainerDied","Data":"1fd9f0889012c2165e610a81b6a4f2a5da23553c96622124efa202c3308c2ae1"} Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.171276 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fd9f0889012c2165e610a81b6a4f2a5da23553c96622124efa202c3308c2ae1" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.194121 4869 scope.go:117] "RemoveContainer" containerID="ab38195008dce8827bc486d5644e9325e3a72d0d683b3b5277e0c7894325a7b8" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.213568 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 06 14:20:09 crc kubenswrapper[4869]: E0106 14:20:09.214368 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9808d05c-1692-4b1f-b1be-5060fc290609" containerName="nova-manage" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.214382 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="9808d05c-1692-4b1f-b1be-5060fc290609" containerName="nova-manage" Jan 06 14:20:09 crc kubenswrapper[4869]: E0106 14:20:09.214397 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798c903a-0423-4e97-a986-9b705bb64ad9" containerName="dnsmasq-dns" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.214405 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="798c903a-0423-4e97-a986-9b705bb64ad9" containerName="dnsmasq-dns" Jan 06 14:20:09 crc kubenswrapper[4869]: E0106 14:20:09.214428 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f" containerName="nova-cell1-conductor-db-sync" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.214435 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f" containerName="nova-cell1-conductor-db-sync" Jan 06 14:20:09 crc kubenswrapper[4869]: E0106 14:20:09.214447 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798c903a-0423-4e97-a986-9b705bb64ad9" containerName="init" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.214473 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="798c903a-0423-4e97-a986-9b705bb64ad9" containerName="init" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.214633 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="9808d05c-1692-4b1f-b1be-5060fc290609" containerName="nova-manage" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.214697 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f" containerName="nova-cell1-conductor-db-sync" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.214715 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="798c903a-0423-4e97-a986-9b705bb64ad9" containerName="dnsmasq-dns" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.215326 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.220261 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.242622 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.251455 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-frkrl"] Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.259106 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-frkrl"] Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.298211 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.327341 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sx5l\" (UniqueName: \"kubernetes.io/projected/ca1929b8-a2a1-40cb-81d2-666f2687a69d-kube-api-access-2sx5l\") pod \"nova-cell1-conductor-0\" (UID: \"ca1929b8-a2a1-40cb-81d2-666f2687a69d\") " pod="openstack/nova-cell1-conductor-0" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.327451 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca1929b8-a2a1-40cb-81d2-666f2687a69d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ca1929b8-a2a1-40cb-81d2-666f2687a69d\") " pod="openstack/nova-cell1-conductor-0" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.327507 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca1929b8-a2a1-40cb-81d2-666f2687a69d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ca1929b8-a2a1-40cb-81d2-666f2687a69d\") " pod="openstack/nova-cell1-conductor-0" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.428511 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e600160-27c9-4d34-a068-60a8d85ba08a-config-data\") pod \"0e600160-27c9-4d34-a068-60a8d85ba08a\" (UID: \"0e600160-27c9-4d34-a068-60a8d85ba08a\") " Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.428839 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e600160-27c9-4d34-a068-60a8d85ba08a-logs\") pod \"0e600160-27c9-4d34-a068-60a8d85ba08a\" (UID: \"0e600160-27c9-4d34-a068-60a8d85ba08a\") " Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.428874 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e600160-27c9-4d34-a068-60a8d85ba08a-nova-metadata-tls-certs\") pod \"0e600160-27c9-4d34-a068-60a8d85ba08a\" (UID: \"0e600160-27c9-4d34-a068-60a8d85ba08a\") " Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.428935 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncqzb\" (UniqueName: \"kubernetes.io/projected/0e600160-27c9-4d34-a068-60a8d85ba08a-kube-api-access-ncqzb\") pod \"0e600160-27c9-4d34-a068-60a8d85ba08a\" (UID: \"0e600160-27c9-4d34-a068-60a8d85ba08a\") " Jan 06 14:20:09 crc 
kubenswrapper[4869]: I0106 14:20:09.428961 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e600160-27c9-4d34-a068-60a8d85ba08a-combined-ca-bundle\") pod \"0e600160-27c9-4d34-a068-60a8d85ba08a\" (UID: \"0e600160-27c9-4d34-a068-60a8d85ba08a\") " Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.429182 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sx5l\" (UniqueName: \"kubernetes.io/projected/ca1929b8-a2a1-40cb-81d2-666f2687a69d-kube-api-access-2sx5l\") pod \"nova-cell1-conductor-0\" (UID: \"ca1929b8-a2a1-40cb-81d2-666f2687a69d\") " pod="openstack/nova-cell1-conductor-0" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.429295 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca1929b8-a2a1-40cb-81d2-666f2687a69d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ca1929b8-a2a1-40cb-81d2-666f2687a69d\") " pod="openstack/nova-cell1-conductor-0" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.429361 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca1929b8-a2a1-40cb-81d2-666f2687a69d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ca1929b8-a2a1-40cb-81d2-666f2687a69d\") " pod="openstack/nova-cell1-conductor-0" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.431220 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e600160-27c9-4d34-a068-60a8d85ba08a-logs" (OuterVolumeSpecName: "logs") pod "0e600160-27c9-4d34-a068-60a8d85ba08a" (UID: "0e600160-27c9-4d34-a068-60a8d85ba08a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.434629 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca1929b8-a2a1-40cb-81d2-666f2687a69d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ca1929b8-a2a1-40cb-81d2-666f2687a69d\") " pod="openstack/nova-cell1-conductor-0" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.434826 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e600160-27c9-4d34-a068-60a8d85ba08a-kube-api-access-ncqzb" (OuterVolumeSpecName: "kube-api-access-ncqzb") pod "0e600160-27c9-4d34-a068-60a8d85ba08a" (UID: "0e600160-27c9-4d34-a068-60a8d85ba08a"). InnerVolumeSpecName "kube-api-access-ncqzb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.436305 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca1929b8-a2a1-40cb-81d2-666f2687a69d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ca1929b8-a2a1-40cb-81d2-666f2687a69d\") " pod="openstack/nova-cell1-conductor-0" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.447366 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sx5l\" (UniqueName: \"kubernetes.io/projected/ca1929b8-a2a1-40cb-81d2-666f2687a69d-kube-api-access-2sx5l\") pod \"nova-cell1-conductor-0\" (UID: \"ca1929b8-a2a1-40cb-81d2-666f2687a69d\") " pod="openstack/nova-cell1-conductor-0" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.454490 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e600160-27c9-4d34-a068-60a8d85ba08a-config-data" (OuterVolumeSpecName: "config-data") pod "0e600160-27c9-4d34-a068-60a8d85ba08a" (UID: "0e600160-27c9-4d34-a068-60a8d85ba08a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.458785 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e600160-27c9-4d34-a068-60a8d85ba08a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e600160-27c9-4d34-a068-60a8d85ba08a" (UID: "0e600160-27c9-4d34-a068-60a8d85ba08a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.492447 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e600160-27c9-4d34-a068-60a8d85ba08a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0e600160-27c9-4d34-a068-60a8d85ba08a" (UID: "0e600160-27c9-4d34-a068-60a8d85ba08a"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.530547 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e600160-27c9-4d34-a068-60a8d85ba08a-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.530574 4869 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e600160-27c9-4d34-a068-60a8d85ba08a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.530586 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e600160-27c9-4d34-a068-60a8d85ba08a-logs\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.530595 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncqzb\" (UniqueName: \"kubernetes.io/projected/0e600160-27c9-4d34-a068-60a8d85ba08a-kube-api-access-ncqzb\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.530603 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e600160-27c9-4d34-a068-60a8d85ba08a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.540144 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.716456 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="798c903a-0423-4e97-a986-9b705bb64ad9" path="/var/lib/kubelet/pods/798c903a-0423-4e97-a986-9b705bb64ad9/volumes" Jan 06 14:20:09 crc kubenswrapper[4869]: I0106 14:20:09.994476 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 06 14:20:10 crc kubenswrapper[4869]: W0106 14:20:10.000225 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca1929b8_a2a1_40cb_81d2_666f2687a69d.slice/crio-9b9be6059f92123798ec08a65f6077304af4e90fae7af7d0eab3b24335637139 WatchSource:0}: Error finding container 9b9be6059f92123798ec08a65f6077304af4e90fae7af7d0eab3b24335637139: Status 404 returned error can't find the container with id 9b9be6059f92123798ec08a65f6077304af4e90fae7af7d0eab3b24335637139 Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.175109 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ca1929b8-a2a1-40cb-81d2-666f2687a69d","Type":"ContainerStarted","Data":"958ef29f2dbe1991bf461c99cdea5ff608045f26aca166f906befc0b3dbef1ac"} Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.175155 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ca1929b8-a2a1-40cb-81d2-666f2687a69d","Type":"ContainerStarted","Data":"9b9be6059f92123798ec08a65f6077304af4e90fae7af7d0eab3b24335637139"} Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.175209 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.179251 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.179325 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="10757c5c-36ff-41eb-bba7-d3ad5f372da9" containerName="nova-scheduler-scheduler" containerID="cri-o://1808fa89d2dc5f3f46815bc81f9f1cd07ff7f711c1510977ffe176f830be52f4" gracePeriod=30 Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.179365 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0e600160-27c9-4d34-a068-60a8d85ba08a","Type":"ContainerDied","Data":"a06a88c575387153bb64bce8c1aedbcf4ae72adee5a6c9efc7b8445ef1aa007b"} Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.179404 4869 scope.go:117] "RemoveContainer" containerID="bb115ccef4c5ed8be3800b6d1e69f9641ba4947122699ffcf50f022a33db4d51" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.197776 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.197758564 podStartE2EDuration="1.197758564s" podCreationTimestamp="2026-01-06 14:20:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:20:10.192930564 +0000 UTC m=+1228.732618248" watchObservedRunningTime="2026-01-06 14:20:10.197758564 +0000 UTC m=+1228.737446238" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.202292 4869 scope.go:117] "RemoveContainer" containerID="cb6c3f38b2b0d8bac1dbcd1c77d14745e44fb7740efd0eba42a7635672ef0772" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.221399 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.235036 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.253347 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 06 14:20:10 crc kubenswrapper[4869]: E0106 14:20:10.253799 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e600160-27c9-4d34-a068-60a8d85ba08a" containerName="nova-metadata-log" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.256761 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e600160-27c9-4d34-a068-60a8d85ba08a" containerName="nova-metadata-log" Jan 06 14:20:10 crc kubenswrapper[4869]: E0106 14:20:10.256809 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e600160-27c9-4d34-a068-60a8d85ba08a" containerName="nova-metadata-metadata" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.256818 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e600160-27c9-4d34-a068-60a8d85ba08a" containerName="nova-metadata-metadata" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.257135 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e600160-27c9-4d34-a068-60a8d85ba08a" containerName="nova-metadata-metadata" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.257165 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e600160-27c9-4d34-a068-60a8d85ba08a" containerName="nova-metadata-log" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.258076 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.265264 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.265490 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.278861 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.349621 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be226ea-ef19-4fbe-8a12-d72cef21b03c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\") " pod="openstack/nova-metadata-0" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.349687 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be226ea-ef19-4fbe-8a12-d72cef21b03c-config-data\") pod \"nova-metadata-0\" (UID: \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\") " pod="openstack/nova-metadata-0" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.349715 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpv6d\" (UniqueName: \"kubernetes.io/projected/4be226ea-ef19-4fbe-8a12-d72cef21b03c-kube-api-access-fpv6d\") pod \"nova-metadata-0\" (UID: \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\") " pod="openstack/nova-metadata-0" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.349884 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4be226ea-ef19-4fbe-8a12-d72cef21b03c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\") " pod="openstack/nova-metadata-0" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.349938 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be226ea-ef19-4fbe-8a12-d72cef21b03c-logs\") pod \"nova-metadata-0\" (UID: \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\") " pod="openstack/nova-metadata-0" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.451543 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be226ea-ef19-4fbe-8a12-d72cef21b03c-logs\") pod \"nova-metadata-0\" (UID: \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\") " pod="openstack/nova-metadata-0" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.451661 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be226ea-ef19-4fbe-8a12-d72cef21b03c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\") " pod="openstack/nova-metadata-0" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.451707 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be226ea-ef19-4fbe-8a12-d72cef21b03c-config-data\") pod \"nova-metadata-0\" (UID: \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\") " pod="openstack/nova-metadata-0" Jan 06 14:20:10 crc 
kubenswrapper[4869]: I0106 14:20:10.451732 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpv6d\" (UniqueName: \"kubernetes.io/projected/4be226ea-ef19-4fbe-8a12-d72cef21b03c-kube-api-access-fpv6d\") pod \"nova-metadata-0\" (UID: \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\") " pod="openstack/nova-metadata-0" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.451787 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4be226ea-ef19-4fbe-8a12-d72cef21b03c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\") " pod="openstack/nova-metadata-0" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.452219 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be226ea-ef19-4fbe-8a12-d72cef21b03c-logs\") pod \"nova-metadata-0\" (UID: \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\") " pod="openstack/nova-metadata-0" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.456656 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be226ea-ef19-4fbe-8a12-d72cef21b03c-config-data\") pod \"nova-metadata-0\" (UID: \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\") " pod="openstack/nova-metadata-0" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.457180 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4be226ea-ef19-4fbe-8a12-d72cef21b03c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\") " pod="openstack/nova-metadata-0" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.458221 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be226ea-ef19-4fbe-8a12-d72cef21b03c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\") " pod="openstack/nova-metadata-0" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.471327 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpv6d\" (UniqueName: \"kubernetes.io/projected/4be226ea-ef19-4fbe-8a12-d72cef21b03c-kube-api-access-fpv6d\") pod \"nova-metadata-0\" (UID: \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\") " pod="openstack/nova-metadata-0" Jan 06 14:20:10 crc kubenswrapper[4869]: I0106 14:20:10.584786 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 06 14:20:11 crc kubenswrapper[4869]: I0106 14:20:11.049750 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 06 14:20:11 crc kubenswrapper[4869]: W0106 14:20:11.060421 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4be226ea_ef19_4fbe_8a12_d72cef21b03c.slice/crio-9f0a0fafb17183044a95080e28ca53e5816d88389f66d87e613bf2af90385d70 WatchSource:0}: Error finding container 9f0a0fafb17183044a95080e28ca53e5816d88389f66d87e613bf2af90385d70: Status 404 returned error can't find the container with id 9f0a0fafb17183044a95080e28ca53e5816d88389f66d87e613bf2af90385d70 Jan 06 14:20:11 crc kubenswrapper[4869]: I0106 14:20:11.204974 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4be226ea-ef19-4fbe-8a12-d72cef21b03c","Type":"ContainerStarted","Data":"9f0a0fafb17183044a95080e28ca53e5816d88389f66d87e613bf2af90385d70"} Jan 06 14:20:11 crc kubenswrapper[4869]: I0106 14:20:11.717293 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e600160-27c9-4d34-a068-60a8d85ba08a" path="/var/lib/kubelet/pods/0e600160-27c9-4d34-a068-60a8d85ba08a/volumes" Jan 06 14:20:12 crc kubenswrapper[4869]: I0106 14:20:12.232687 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4be226ea-ef19-4fbe-8a12-d72cef21b03c","Type":"ContainerStarted","Data":"b94e02ab8799ec6b488c82121652307735464c599e4f2cc1c1153f3d4ab09509"} Jan 06 14:20:12 crc kubenswrapper[4869]: I0106 14:20:12.232734 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4be226ea-ef19-4fbe-8a12-d72cef21b03c","Type":"ContainerStarted","Data":"279505af7c7bb12711e79c1388a42c8ed7a1fa97870fb9f768586825d99d73b7"} Jan 06 14:20:12 crc kubenswrapper[4869]: I0106 14:20:12.264626 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.264606648 podStartE2EDuration="2.264606648s" podCreationTimestamp="2026-01-06 14:20:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:20:12.252571019 +0000 UTC m=+1230.792258683" watchObservedRunningTime="2026-01-06 14:20:12.264606648 +0000 UTC m=+1230.804294312" Jan 06 14:20:12 crc kubenswrapper[4869]: E0106 14:20:12.612980 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1808fa89d2dc5f3f46815bc81f9f1cd07ff7f711c1510977ffe176f830be52f4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 06 14:20:12 crc kubenswrapper[4869]: E0106 14:20:12.614111 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1808fa89d2dc5f3f46815bc81f9f1cd07ff7f711c1510977ffe176f830be52f4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 06 14:20:12 crc kubenswrapper[4869]: E0106 14:20:12.616239 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="1808fa89d2dc5f3f46815bc81f9f1cd07ff7f711c1510977ffe176f830be52f4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 06 14:20:12 crc kubenswrapper[4869]: E0106 14:20:12.616299 4869 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="10757c5c-36ff-41eb-bba7-d3ad5f372da9" containerName="nova-scheduler-scheduler" Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.252149 4869 generic.go:334] "Generic (PLEG): container finished" podID="6ede238b-a65d-42e0-af52-4462756ca58a" containerID="fbe21376fbb30002898422b545304b8279733bb9bd2d188fbe6d9453a8dc8bb2" exitCode=0 Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.252243 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ede238b-a65d-42e0-af52-4462756ca58a","Type":"ContainerDied","Data":"fbe21376fbb30002898422b545304b8279733bb9bd2d188fbe6d9453a8dc8bb2"} Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.253077 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6ede238b-a65d-42e0-af52-4462756ca58a","Type":"ContainerDied","Data":"20a419a6414b4cf5284e73123bb810dc77588d45919ac09627afc7020ce6c80b"} Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.253107 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20a419a6414b4cf5284e73123bb810dc77588d45919ac09627afc7020ce6c80b" Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.255546 4869 generic.go:334] "Generic (PLEG): container finished" podID="10757c5c-36ff-41eb-bba7-d3ad5f372da9" containerID="1808fa89d2dc5f3f46815bc81f9f1cd07ff7f711c1510977ffe176f830be52f4" exitCode=0 Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.255579 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"10757c5c-36ff-41eb-bba7-d3ad5f372da9","Type":"ContainerDied","Data":"1808fa89d2dc5f3f46815bc81f9f1cd07ff7f711c1510977ffe176f830be52f4"} Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.255596 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"10757c5c-36ff-41eb-bba7-d3ad5f372da9","Type":"ContainerDied","Data":"7add87320e99e3dd4c7a7769eae870afbb020437f48d2bcedd4471225f7a0fb3"} Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.255606 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7add87320e99e3dd4c7a7769eae870afbb020437f48d2bcedd4471225f7a0fb3" Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.259992 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.288091 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.374907 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10757c5c-36ff-41eb-bba7-d3ad5f372da9-config-data\") pod \"10757c5c-36ff-41eb-bba7-d3ad5f372da9\" (UID: \"10757c5c-36ff-41eb-bba7-d3ad5f372da9\") " Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.375089 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjs26\" (UniqueName: \"kubernetes.io/projected/10757c5c-36ff-41eb-bba7-d3ad5f372da9-kube-api-access-tjs26\") pod \"10757c5c-36ff-41eb-bba7-d3ad5f372da9\" (UID: \"10757c5c-36ff-41eb-bba7-d3ad5f372da9\") " Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.375226 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10757c5c-36ff-41eb-bba7-d3ad5f372da9-combined-ca-bundle\") pod \"10757c5c-36ff-41eb-bba7-d3ad5f372da9\" (UID: \"10757c5c-36ff-41eb-bba7-d3ad5f372da9\") " Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.381106 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10757c5c-36ff-41eb-bba7-d3ad5f372da9-kube-api-access-tjs26" (OuterVolumeSpecName: "kube-api-access-tjs26") pod "10757c5c-36ff-41eb-bba7-d3ad5f372da9" (UID: "10757c5c-36ff-41eb-bba7-d3ad5f372da9"). InnerVolumeSpecName "kube-api-access-tjs26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.401831 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10757c5c-36ff-41eb-bba7-d3ad5f372da9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10757c5c-36ff-41eb-bba7-d3ad5f372da9" (UID: "10757c5c-36ff-41eb-bba7-d3ad5f372da9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.401984 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10757c5c-36ff-41eb-bba7-d3ad5f372da9-config-data" (OuterVolumeSpecName: "config-data") pod "10757c5c-36ff-41eb-bba7-d3ad5f372da9" (UID: "10757c5c-36ff-41eb-bba7-d3ad5f372da9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.477315 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ede238b-a65d-42e0-af52-4462756ca58a-combined-ca-bundle\") pod \"6ede238b-a65d-42e0-af52-4462756ca58a\" (UID: \"6ede238b-a65d-42e0-af52-4462756ca58a\") " Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.477443 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f4fx\" (UniqueName: \"kubernetes.io/projected/6ede238b-a65d-42e0-af52-4462756ca58a-kube-api-access-8f4fx\") pod \"6ede238b-a65d-42e0-af52-4462756ca58a\" (UID: \"6ede238b-a65d-42e0-af52-4462756ca58a\") " Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.477531 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ede238b-a65d-42e0-af52-4462756ca58a-config-data\") pod \"6ede238b-a65d-42e0-af52-4462756ca58a\" (UID: \"6ede238b-a65d-42e0-af52-4462756ca58a\") " Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.477690 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ede238b-a65d-42e0-af52-4462756ca58a-logs\") pod \"6ede238b-a65d-42e0-af52-4462756ca58a\" (UID: \"6ede238b-a65d-42e0-af52-4462756ca58a\") " Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.478125 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10757c5c-36ff-41eb-bba7-d3ad5f372da9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.478150 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10757c5c-36ff-41eb-bba7-d3ad5f372da9-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.478161 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjs26\" (UniqueName: \"kubernetes.io/projected/10757c5c-36ff-41eb-bba7-d3ad5f372da9-kube-api-access-tjs26\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.478289 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ede238b-a65d-42e0-af52-4462756ca58a-logs" (OuterVolumeSpecName: "logs") pod "6ede238b-a65d-42e0-af52-4462756ca58a" (UID: "6ede238b-a65d-42e0-af52-4462756ca58a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.480738 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ede238b-a65d-42e0-af52-4462756ca58a-kube-api-access-8f4fx" (OuterVolumeSpecName: "kube-api-access-8f4fx") pod "6ede238b-a65d-42e0-af52-4462756ca58a" (UID: "6ede238b-a65d-42e0-af52-4462756ca58a"). InnerVolumeSpecName "kube-api-access-8f4fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.501535 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ede238b-a65d-42e0-af52-4462756ca58a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ede238b-a65d-42e0-af52-4462756ca58a" (UID: "6ede238b-a65d-42e0-af52-4462756ca58a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.502016 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ede238b-a65d-42e0-af52-4462756ca58a-config-data" (OuterVolumeSpecName: "config-data") pod "6ede238b-a65d-42e0-af52-4462756ca58a" (UID: "6ede238b-a65d-42e0-af52-4462756ca58a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.579426 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ede238b-a65d-42e0-af52-4462756ca58a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.579466 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f4fx\" (UniqueName: \"kubernetes.io/projected/6ede238b-a65d-42e0-af52-4462756ca58a-kube-api-access-8f4fx\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.579480 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ede238b-a65d-42e0-af52-4462756ca58a-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:14 crc kubenswrapper[4869]: I0106 14:20:14.579489 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ede238b-a65d-42e0-af52-4462756ca58a-logs\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.263006 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.263015 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.305007 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.336603 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.352818 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.368508 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.377273 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 06 14:20:15 crc kubenswrapper[4869]: E0106 14:20:15.377795 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10757c5c-36ff-41eb-bba7-d3ad5f372da9" containerName="nova-scheduler-scheduler" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.377823 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="10757c5c-36ff-41eb-bba7-d3ad5f372da9" containerName="nova-scheduler-scheduler" Jan 06 14:20:15 crc kubenswrapper[4869]: E0106 14:20:15.377864 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ede238b-a65d-42e0-af52-4462756ca58a" containerName="nova-api-api" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.377874 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ede238b-a65d-42e0-af52-4462756ca58a" containerName="nova-api-api" Jan 06 14:20:15 crc kubenswrapper[4869]: E0106 14:20:15.377895 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ede238b-a65d-42e0-af52-4462756ca58a" containerName="nova-api-log" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.377904 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ede238b-a65d-42e0-af52-4462756ca58a" containerName="nova-api-log" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.378142 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ede238b-a65d-42e0-af52-4462756ca58a" containerName="nova-api-log" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.378187 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ede238b-a65d-42e0-af52-4462756ca58a" containerName="nova-api-api" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.378205 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="10757c5c-36ff-41eb-bba7-d3ad5f372da9" containerName="nova-scheduler-scheduler" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.379411 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.381915 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.389137 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.403532 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.404741 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.406692 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.414603 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.493082 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4046ec6f-300f-4060-986a-e7fdbb596003-logs\") pod \"nova-api-0\" (UID: \"4046ec6f-300f-4060-986a-e7fdbb596003\") " pod="openstack/nova-api-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.493196 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xklc5\" (UniqueName: \"kubernetes.io/projected/4046ec6f-300f-4060-986a-e7fdbb596003-kube-api-access-xklc5\") pod \"nova-api-0\" (UID: \"4046ec6f-300f-4060-986a-e7fdbb596003\") " pod="openstack/nova-api-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.493220 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4046ec6f-300f-4060-986a-e7fdbb596003-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4046ec6f-300f-4060-986a-e7fdbb596003\") " pod="openstack/nova-api-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.493259 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4046ec6f-300f-4060-986a-e7fdbb596003-config-data\") pod \"nova-api-0\" (UID: \"4046ec6f-300f-4060-986a-e7fdbb596003\") " pod="openstack/nova-api-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.585824 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.585877 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.595439 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4046ec6f-300f-4060-986a-e7fdbb596003-config-data\") pod \"nova-api-0\" (UID: \"4046ec6f-300f-4060-986a-e7fdbb596003\") " pod="openstack/nova-api-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.595521 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4046ec6f-300f-4060-986a-e7fdbb596003-logs\") pod \"nova-api-0\" (UID: \"4046ec6f-300f-4060-986a-e7fdbb596003\") " pod="openstack/nova-api-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.595596 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f4c37f2-9138-43a0-af83-89ceee8c250e-config-data\") pod \"nova-scheduler-0\" (UID: \"5f4c37f2-9138-43a0-af83-89ceee8c250e\") " pod="openstack/nova-scheduler-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.595632 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f4c37f2-9138-43a0-af83-89ceee8c250e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"5f4c37f2-9138-43a0-af83-89ceee8c250e\") " pod="openstack/nova-scheduler-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.595687 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xklc5\" (UniqueName: \"kubernetes.io/projected/4046ec6f-300f-4060-986a-e7fdbb596003-kube-api-access-xklc5\") pod \"nova-api-0\" (UID: \"4046ec6f-300f-4060-986a-e7fdbb596003\") " pod="openstack/nova-api-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.595707 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4046ec6f-300f-4060-986a-e7fdbb596003-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4046ec6f-300f-4060-986a-e7fdbb596003\") " pod="openstack/nova-api-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.595740 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p7bf\" (UniqueName: \"kubernetes.io/projected/5f4c37f2-9138-43a0-af83-89ceee8c250e-kube-api-access-5p7bf\") pod \"nova-scheduler-0\" (UID: \"5f4c37f2-9138-43a0-af83-89ceee8c250e\") " pod="openstack/nova-scheduler-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.596405 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4046ec6f-300f-4060-986a-e7fdbb596003-logs\") pod \"nova-api-0\" (UID: \"4046ec6f-300f-4060-986a-e7fdbb596003\") " pod="openstack/nova-api-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.599391 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4046ec6f-300f-4060-986a-e7fdbb596003-config-data\") pod \"nova-api-0\" (UID: \"4046ec6f-300f-4060-986a-e7fdbb596003\") " pod="openstack/nova-api-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.600968 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4046ec6f-300f-4060-986a-e7fdbb596003-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4046ec6f-300f-4060-986a-e7fdbb596003\") " pod="openstack/nova-api-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.611155 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xklc5\" (UniqueName: \"kubernetes.io/projected/4046ec6f-300f-4060-986a-e7fdbb596003-kube-api-access-xklc5\") pod \"nova-api-0\" (UID: \"4046ec6f-300f-4060-986a-e7fdbb596003\") " pod="openstack/nova-api-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.697599 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p7bf\" (UniqueName: \"kubernetes.io/projected/5f4c37f2-9138-43a0-af83-89ceee8c250e-kube-api-access-5p7bf\") pod \"nova-scheduler-0\" (UID: \"5f4c37f2-9138-43a0-af83-89ceee8c250e\") " pod="openstack/nova-scheduler-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.697740 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f4c37f2-9138-43a0-af83-89ceee8c250e-config-data\") pod \"nova-scheduler-0\" (UID: \"5f4c37f2-9138-43a0-af83-89ceee8c250e\") " pod="openstack/nova-scheduler-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.697771 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5f4c37f2-9138-43a0-af83-89ceee8c250e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5f4c37f2-9138-43a0-af83-89ceee8c250e\") " pod="openstack/nova-scheduler-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.699154 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.701541 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f4c37f2-9138-43a0-af83-89ceee8c250e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5f4c37f2-9138-43a0-af83-89ceee8c250e\") " pod="openstack/nova-scheduler-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.701738 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f4c37f2-9138-43a0-af83-89ceee8c250e-config-data\") pod \"nova-scheduler-0\" (UID: \"5f4c37f2-9138-43a0-af83-89ceee8c250e\") " pod="openstack/nova-scheduler-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.714754 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10757c5c-36ff-41eb-bba7-d3ad5f372da9" path="/var/lib/kubelet/pods/10757c5c-36ff-41eb-bba7-d3ad5f372da9/volumes" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.715408 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ede238b-a65d-42e0-af52-4462756ca58a" path="/var/lib/kubelet/pods/6ede238b-a65d-42e0-af52-4462756ca58a/volumes" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.716918 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p7bf\" (UniqueName: \"kubernetes.io/projected/5f4c37f2-9138-43a0-af83-89ceee8c250e-kube-api-access-5p7bf\") pod \"nova-scheduler-0\" (UID: \"5f4c37f2-9138-43a0-af83-89ceee8c250e\") " pod="openstack/nova-scheduler-0" Jan 06 14:20:15 crc kubenswrapper[4869]: I0106 14:20:15.724872 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 06 14:20:16 crc kubenswrapper[4869]: I0106 14:20:16.166521 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 06 14:20:16 crc kubenswrapper[4869]: W0106 14:20:16.178149 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4046ec6f_300f_4060_986a_e7fdbb596003.slice/crio-627c6a2485a26ca7634f557f1df213f88ce068cd85236b02add14f119fb69cdc WatchSource:0}: Error finding container 627c6a2485a26ca7634f557f1df213f88ce068cd85236b02add14f119fb69cdc: Status 404 returned error can't find the container with id 627c6a2485a26ca7634f557f1df213f88ce068cd85236b02add14f119fb69cdc Jan 06 14:20:16 crc kubenswrapper[4869]: I0106 14:20:16.255024 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 06 14:20:16 crc kubenswrapper[4869]: W0106 14:20:16.257195 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f4c37f2_9138_43a0_af83_89ceee8c250e.slice/crio-67b5c7a4d1b768444664725b4bdb8e5ae313c3db1a1d4a71a149931026fd5931 WatchSource:0}: Error finding container 67b5c7a4d1b768444664725b4bdb8e5ae313c3db1a1d4a71a149931026fd5931: Status 404 returned error can't find the container with id 67b5c7a4d1b768444664725b4bdb8e5ae313c3db1a1d4a71a149931026fd5931 Jan 06 14:20:16 crc kubenswrapper[4869]: I0106 14:20:16.273315 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4046ec6f-300f-4060-986a-e7fdbb596003","Type":"ContainerStarted","Data":"627c6a2485a26ca7634f557f1df213f88ce068cd85236b02add14f119fb69cdc"} Jan 06 14:20:16 crc kubenswrapper[4869]: I0106 14:20:16.274838 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5f4c37f2-9138-43a0-af83-89ceee8c250e","Type":"ContainerStarted","Data":"67b5c7a4d1b768444664725b4bdb8e5ae313c3db1a1d4a71a149931026fd5931"} Jan 06 14:20:17 crc kubenswrapper[4869]: I0106 14:20:17.187732 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 06 14:20:17 crc kubenswrapper[4869]: I0106 14:20:17.285770 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4046ec6f-300f-4060-986a-e7fdbb596003","Type":"ContainerStarted","Data":"7ecbd90d07f4d73084110339311141a5bf91630ea19b6e4c0f8b474c9047b0f1"} Jan 06 14:20:17 crc kubenswrapper[4869]: I0106 14:20:17.285830 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4046ec6f-300f-4060-986a-e7fdbb596003","Type":"ContainerStarted","Data":"f655d6ba82425ddaed19e7f15cc94559263278a8e7648d2b9e7560edc8310ea0"} Jan 06 14:20:17 crc kubenswrapper[4869]: I0106 14:20:17.290788 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5f4c37f2-9138-43a0-af83-89ceee8c250e","Type":"ContainerStarted","Data":"1019b27a570d545d48e2eed6e24e6198529bbb9623db7569d839a0c50ad751c7"} Jan 06 14:20:17 crc kubenswrapper[4869]: I0106 14:20:17.327164 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.327144922 podStartE2EDuration="2.327144922s" podCreationTimestamp="2026-01-06 14:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:20:17.305357613 +0000 UTC 
m=+1235.845045287" watchObservedRunningTime="2026-01-06 14:20:17.327144922 +0000 UTC m=+1235.866832586" Jan 06 14:20:17 crc kubenswrapper[4869]: I0106 14:20:17.352651 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.352622791 podStartE2EDuration="2.352622791s" podCreationTimestamp="2026-01-06 14:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:20:17.334756829 +0000 UTC m=+1235.874444503" watchObservedRunningTime="2026-01-06 14:20:17.352622791 +0000 UTC m=+1235.892310465" Jan 06 14:20:19 crc kubenswrapper[4869]: I0106 14:20:19.576875 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 06 14:20:19 crc kubenswrapper[4869]: I0106 14:20:19.588053 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 06 14:20:19 crc kubenswrapper[4869]: I0106 14:20:19.588340 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="92078172-9112-49c9-91a9-d694a11411c1" containerName="kube-state-metrics" containerID="cri-o://a0df49e4d5672faed740a1dd8a87206648915ac18f259c76a82236e0f8dfb933" gracePeriod=30 Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.044526 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.175595 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7n7s\" (UniqueName: \"kubernetes.io/projected/92078172-9112-49c9-91a9-d694a11411c1-kube-api-access-b7n7s\") pod \"92078172-9112-49c9-91a9-d694a11411c1\" (UID: \"92078172-9112-49c9-91a9-d694a11411c1\") " Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.193575 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92078172-9112-49c9-91a9-d694a11411c1-kube-api-access-b7n7s" (OuterVolumeSpecName: "kube-api-access-b7n7s") pod "92078172-9112-49c9-91a9-d694a11411c1" (UID: "92078172-9112-49c9-91a9-d694a11411c1"). InnerVolumeSpecName "kube-api-access-b7n7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.277370 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7n7s\" (UniqueName: \"kubernetes.io/projected/92078172-9112-49c9-91a9-d694a11411c1-kube-api-access-b7n7s\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.314143 4869 generic.go:334] "Generic (PLEG): container finished" podID="92078172-9112-49c9-91a9-d694a11411c1" containerID="a0df49e4d5672faed740a1dd8a87206648915ac18f259c76a82236e0f8dfb933" exitCode=2 Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.314200 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"92078172-9112-49c9-91a9-d694a11411c1","Type":"ContainerDied","Data":"a0df49e4d5672faed740a1dd8a87206648915ac18f259c76a82236e0f8dfb933"} Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.314201 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.314243 4869 scope.go:117] "RemoveContainer" containerID="a0df49e4d5672faed740a1dd8a87206648915ac18f259c76a82236e0f8dfb933" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.314231 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"92078172-9112-49c9-91a9-d694a11411c1","Type":"ContainerDied","Data":"a81daa19b5004666eadbb34c1bc68d3b8fc3a1b8cec315d56503abeff7f9cc4c"} Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.335541 4869 scope.go:117] "RemoveContainer" containerID="a0df49e4d5672faed740a1dd8a87206648915ac18f259c76a82236e0f8dfb933" Jan 06 14:20:20 crc kubenswrapper[4869]: E0106 14:20:20.335981 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0df49e4d5672faed740a1dd8a87206648915ac18f259c76a82236e0f8dfb933\": container with ID starting with a0df49e4d5672faed740a1dd8a87206648915ac18f259c76a82236e0f8dfb933 not found: ID does not exist" containerID="a0df49e4d5672faed740a1dd8a87206648915ac18f259c76a82236e0f8dfb933" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.336025 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0df49e4d5672faed740a1dd8a87206648915ac18f259c76a82236e0f8dfb933"} err="failed to get container status \"a0df49e4d5672faed740a1dd8a87206648915ac18f259c76a82236e0f8dfb933\": rpc error: code = NotFound desc = could not find container \"a0df49e4d5672faed740a1dd8a87206648915ac18f259c76a82236e0f8dfb933\": container with ID starting with a0df49e4d5672faed740a1dd8a87206648915ac18f259c76a82236e0f8dfb933 not found: ID does not exist" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.349889 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.360576 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.370798 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 06 14:20:20 crc kubenswrapper[4869]: E0106 14:20:20.371323 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92078172-9112-49c9-91a9-d694a11411c1" containerName="kube-state-metrics" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.371353 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="92078172-9112-49c9-91a9-d694a11411c1" containerName="kube-state-metrics" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.371614 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="92078172-9112-49c9-91a9-d694a11411c1" containerName="kube-state-metrics" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.372849 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.374881 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.377360 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.382970 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.480476 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/556f7f3f-b9e0-4e69-a659-5ef5d052a7b4-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"556f7f3f-b9e0-4e69-a659-5ef5d052a7b4\") " pod="openstack/kube-state-metrics-0" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.480535 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/556f7f3f-b9e0-4e69-a659-5ef5d052a7b4-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"556f7f3f-b9e0-4e69-a659-5ef5d052a7b4\") " pod="openstack/kube-state-metrics-0" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.481345 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmv8q\" (UniqueName: \"kubernetes.io/projected/556f7f3f-b9e0-4e69-a659-5ef5d052a7b4-kube-api-access-tmv8q\") pod \"kube-state-metrics-0\" (UID: \"556f7f3f-b9e0-4e69-a659-5ef5d052a7b4\") " pod="openstack/kube-state-metrics-0" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.481471 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/556f7f3f-b9e0-4e69-a659-5ef5d052a7b4-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"556f7f3f-b9e0-4e69-a659-5ef5d052a7b4\") " pod="openstack/kube-state-metrics-0" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.583685 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmv8q\" (UniqueName: \"kubernetes.io/projected/556f7f3f-b9e0-4e69-a659-5ef5d052a7b4-kube-api-access-tmv8q\") pod \"kube-state-metrics-0\" (UID: \"556f7f3f-b9e0-4e69-a659-5ef5d052a7b4\") " pod="openstack/kube-state-metrics-0" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.583768 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/556f7f3f-b9e0-4e69-a659-5ef5d052a7b4-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"556f7f3f-b9e0-4e69-a659-5ef5d052a7b4\") " pod="openstack/kube-state-metrics-0" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.583842 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/556f7f3f-b9e0-4e69-a659-5ef5d052a7b4-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"556f7f3f-b9e0-4e69-a659-5ef5d052a7b4\") " pod="openstack/kube-state-metrics-0" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.583874 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/556f7f3f-b9e0-4e69-a659-5ef5d052a7b4-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"556f7f3f-b9e0-4e69-a659-5ef5d052a7b4\") " pod="openstack/kube-state-metrics-0" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.585514 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.585567 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.588310 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/556f7f3f-b9e0-4e69-a659-5ef5d052a7b4-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"556f7f3f-b9e0-4e69-a659-5ef5d052a7b4\") " pod="openstack/kube-state-metrics-0" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.594238 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/556f7f3f-b9e0-4e69-a659-5ef5d052a7b4-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"556f7f3f-b9e0-4e69-a659-5ef5d052a7b4\") " pod="openstack/kube-state-metrics-0" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.600268 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/556f7f3f-b9e0-4e69-a659-5ef5d052a7b4-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"556f7f3f-b9e0-4e69-a659-5ef5d052a7b4\") " pod="openstack/kube-state-metrics-0" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.601184 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmv8q\" (UniqueName: \"kubernetes.io/projected/556f7f3f-b9e0-4e69-a659-5ef5d052a7b4-kube-api-access-tmv8q\") pod \"kube-state-metrics-0\" (UID: \"556f7f3f-b9e0-4e69-a659-5ef5d052a7b4\") " pod="openstack/kube-state-metrics-0" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.619637 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.619911 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f526de6-6318-47b6-842b-761a6161f704" containerName="ceilometer-central-agent" containerID="cri-o://72765ec111af53e087747ece17e69354b55073c891f6acf387d2a430cd48b483" gracePeriod=30 Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.620289 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f526de6-6318-47b6-842b-761a6161f704" containerName="proxy-httpd" containerID="cri-o://36a75164ec476f960b7b579823f49b65db9a3d1ed21eecee2a8800dd64e507fc" gracePeriod=30 Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.620340 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f526de6-6318-47b6-842b-761a6161f704" containerName="sg-core" containerID="cri-o://3a27a1298b046d985057c6e688ba18149efcbed06dad89cdb79b2b86c23d2610" gracePeriod=30 Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.620369 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7f526de6-6318-47b6-842b-761a6161f704" containerName="ceilometer-notification-agent" 
containerID="cri-o://00d75401413110223a5e0d08b8226540003d0fba8dc8d6f66c671056683217ea" gracePeriod=30 Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.690200 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 06 14:20:20 crc kubenswrapper[4869]: I0106 14:20:20.726875 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 06 14:20:21 crc kubenswrapper[4869]: W0106 14:20:21.153497 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod556f7f3f_b9e0_4e69_a659_5ef5d052a7b4.slice/crio-a2181c398876d3a7456c5ed4a3962cf3f4e3a4be737a33cd96a992a6e488542d WatchSource:0}: Error finding container a2181c398876d3a7456c5ed4a3962cf3f4e3a4be737a33cd96a992a6e488542d: Status 404 returned error can't find the container with id a2181c398876d3a7456c5ed4a3962cf3f4e3a4be737a33cd96a992a6e488542d Jan 06 14:20:21 crc kubenswrapper[4869]: I0106 14:20:21.156933 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 06 14:20:21 crc kubenswrapper[4869]: I0106 14:20:21.161202 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 06 14:20:21 crc kubenswrapper[4869]: I0106 14:20:21.324891 4869 generic.go:334] "Generic (PLEG): container finished" podID="7f526de6-6318-47b6-842b-761a6161f704" containerID="36a75164ec476f960b7b579823f49b65db9a3d1ed21eecee2a8800dd64e507fc" exitCode=0 Jan 06 14:20:21 crc kubenswrapper[4869]: I0106 14:20:21.324924 4869 generic.go:334] "Generic (PLEG): container finished" podID="7f526de6-6318-47b6-842b-761a6161f704" containerID="3a27a1298b046d985057c6e688ba18149efcbed06dad89cdb79b2b86c23d2610" exitCode=2 Jan 06 14:20:21 crc kubenswrapper[4869]: I0106 14:20:21.324932 4869 generic.go:334] "Generic (PLEG): container finished" podID="7f526de6-6318-47b6-842b-761a6161f704" containerID="72765ec111af53e087747ece17e69354b55073c891f6acf387d2a430cd48b483" exitCode=0 Jan 06 14:20:21 crc kubenswrapper[4869]: I0106 14:20:21.324965 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f526de6-6318-47b6-842b-761a6161f704","Type":"ContainerDied","Data":"36a75164ec476f960b7b579823f49b65db9a3d1ed21eecee2a8800dd64e507fc"} Jan 06 14:20:21 crc kubenswrapper[4869]: I0106 14:20:21.324988 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f526de6-6318-47b6-842b-761a6161f704","Type":"ContainerDied","Data":"3a27a1298b046d985057c6e688ba18149efcbed06dad89cdb79b2b86c23d2610"} Jan 06 14:20:21 crc kubenswrapper[4869]: I0106 14:20:21.324997 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f526de6-6318-47b6-842b-761a6161f704","Type":"ContainerDied","Data":"72765ec111af53e087747ece17e69354b55073c891f6acf387d2a430cd48b483"} Jan 06 14:20:21 crc kubenswrapper[4869]: I0106 14:20:21.326414 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"556f7f3f-b9e0-4e69-a659-5ef5d052a7b4","Type":"ContainerStarted","Data":"a2181c398876d3a7456c5ed4a3962cf3f4e3a4be737a33cd96a992a6e488542d"} Jan 06 14:20:21 crc kubenswrapper[4869]: I0106 14:20:21.597989 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4be226ea-ef19-4fbe-8a12-d72cef21b03c" containerName="nova-metadata-log" probeResult="failure" output="Get 
\"https://10.217.0.178:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 06 14:20:21 crc kubenswrapper[4869]: I0106 14:20:21.598013 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4be226ea-ef19-4fbe-8a12-d72cef21b03c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 06 14:20:21 crc kubenswrapper[4869]: I0106 14:20:21.719391 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92078172-9112-49c9-91a9-d694a11411c1" path="/var/lib/kubelet/pods/92078172-9112-49c9-91a9-d694a11411c1/volumes" Jan 06 14:20:22 crc kubenswrapper[4869]: I0106 14:20:22.335580 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"556f7f3f-b9e0-4e69-a659-5ef5d052a7b4","Type":"ContainerStarted","Data":"a129e96b1d3ed4ce89fcd15d854798e95083e379f03f8d359155c022245a5bf8"} Jan 06 14:20:22 crc kubenswrapper[4869]: I0106 14:20:22.336998 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 06 14:20:22 crc kubenswrapper[4869]: I0106 14:20:22.361084 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.009827792 podStartE2EDuration="2.361063958s" podCreationTimestamp="2026-01-06 14:20:20 +0000 UTC" firstStartedPulling="2026-01-06 14:20:21.156639252 +0000 UTC m=+1239.696326926" lastFinishedPulling="2026-01-06 14:20:21.507875428 +0000 UTC m=+1240.047563092" observedRunningTime="2026-01-06 14:20:22.357798177 +0000 UTC m=+1240.897485881" watchObservedRunningTime="2026-01-06 14:20:22.361063958 +0000 UTC m=+1240.900751632" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.316132 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.369438 4869 generic.go:334] "Generic (PLEG): container finished" podID="7f526de6-6318-47b6-842b-761a6161f704" containerID="00d75401413110223a5e0d08b8226540003d0fba8dc8d6f66c671056683217ea" exitCode=0 Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.369543 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f526de6-6318-47b6-842b-761a6161f704","Type":"ContainerDied","Data":"00d75401413110223a5e0d08b8226540003d0fba8dc8d6f66c671056683217ea"} Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.369581 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7f526de6-6318-47b6-842b-761a6161f704","Type":"ContainerDied","Data":"f8d9082a615624c5847cf22d425739d4377cae540fa22e20addd5152300410a2"} Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.369578 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.369619 4869 scope.go:117] "RemoveContainer" containerID="36a75164ec476f960b7b579823f49b65db9a3d1ed21eecee2a8800dd64e507fc" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.409716 4869 scope.go:117] "RemoveContainer" containerID="3a27a1298b046d985057c6e688ba18149efcbed06dad89cdb79b2b86c23d2610" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.434174 4869 scope.go:117] "RemoveContainer" containerID="00d75401413110223a5e0d08b8226540003d0fba8dc8d6f66c671056683217ea" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.452644 4869 scope.go:117] "RemoveContainer" containerID="72765ec111af53e087747ece17e69354b55073c891f6acf387d2a430cd48b483" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.474052 4869 scope.go:117] "RemoveContainer" containerID="36a75164ec476f960b7b579823f49b65db9a3d1ed21eecee2a8800dd64e507fc" Jan 06 14:20:25 crc kubenswrapper[4869]: E0106 14:20:25.474498 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36a75164ec476f960b7b579823f49b65db9a3d1ed21eecee2a8800dd64e507fc\": container with ID starting with 36a75164ec476f960b7b579823f49b65db9a3d1ed21eecee2a8800dd64e507fc not found: ID does not exist" containerID="36a75164ec476f960b7b579823f49b65db9a3d1ed21eecee2a8800dd64e507fc" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.474544 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36a75164ec476f960b7b579823f49b65db9a3d1ed21eecee2a8800dd64e507fc"} err="failed to get container status \"36a75164ec476f960b7b579823f49b65db9a3d1ed21eecee2a8800dd64e507fc\": rpc error: code = NotFound desc = could not find container \"36a75164ec476f960b7b579823f49b65db9a3d1ed21eecee2a8800dd64e507fc\": container with ID starting with 36a75164ec476f960b7b579823f49b65db9a3d1ed21eecee2a8800dd64e507fc not found: ID does not exist" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.474571 4869 scope.go:117] "RemoveContainer" containerID="3a27a1298b046d985057c6e688ba18149efcbed06dad89cdb79b2b86c23d2610" Jan 06 14:20:25 crc kubenswrapper[4869]: E0106 14:20:25.475041 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a27a1298b046d985057c6e688ba18149efcbed06dad89cdb79b2b86c23d2610\": container with ID starting with 3a27a1298b046d985057c6e688ba18149efcbed06dad89cdb79b2b86c23d2610 not found: ID does not exist" containerID="3a27a1298b046d985057c6e688ba18149efcbed06dad89cdb79b2b86c23d2610" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.475097 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a27a1298b046d985057c6e688ba18149efcbed06dad89cdb79b2b86c23d2610"} err="failed to get container status \"3a27a1298b046d985057c6e688ba18149efcbed06dad89cdb79b2b86c23d2610\": rpc error: code = NotFound desc = could not find container \"3a27a1298b046d985057c6e688ba18149efcbed06dad89cdb79b2b86c23d2610\": container with ID starting with 3a27a1298b046d985057c6e688ba18149efcbed06dad89cdb79b2b86c23d2610 not found: ID does not exist" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.475134 4869 scope.go:117] "RemoveContainer" containerID="00d75401413110223a5e0d08b8226540003d0fba8dc8d6f66c671056683217ea" Jan 06 14:20:25 crc kubenswrapper[4869]: E0106 14:20:25.475451 4869 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"00d75401413110223a5e0d08b8226540003d0fba8dc8d6f66c671056683217ea\": container with ID starting with 00d75401413110223a5e0d08b8226540003d0fba8dc8d6f66c671056683217ea not found: ID does not exist" containerID="00d75401413110223a5e0d08b8226540003d0fba8dc8d6f66c671056683217ea" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.475488 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00d75401413110223a5e0d08b8226540003d0fba8dc8d6f66c671056683217ea"} err="failed to get container status \"00d75401413110223a5e0d08b8226540003d0fba8dc8d6f66c671056683217ea\": rpc error: code = NotFound desc = could not find container \"00d75401413110223a5e0d08b8226540003d0fba8dc8d6f66c671056683217ea\": container with ID starting with 00d75401413110223a5e0d08b8226540003d0fba8dc8d6f66c671056683217ea not found: ID does not exist" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.475504 4869 scope.go:117] "RemoveContainer" containerID="72765ec111af53e087747ece17e69354b55073c891f6acf387d2a430cd48b483" Jan 06 14:20:25 crc kubenswrapper[4869]: E0106 14:20:25.475770 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72765ec111af53e087747ece17e69354b55073c891f6acf387d2a430cd48b483\": container with ID starting with 72765ec111af53e087747ece17e69354b55073c891f6acf387d2a430cd48b483 not found: ID does not exist" containerID="72765ec111af53e087747ece17e69354b55073c891f6acf387d2a430cd48b483" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.475792 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72765ec111af53e087747ece17e69354b55073c891f6acf387d2a430cd48b483"} err="failed to get container status \"72765ec111af53e087747ece17e69354b55073c891f6acf387d2a430cd48b483\": rpc error: code = NotFound desc = could not find container \"72765ec111af53e087747ece17e69354b55073c891f6acf387d2a430cd48b483\": container with ID starting with 72765ec111af53e087747ece17e69354b55073c891f6acf387d2a430cd48b483 not found: ID does not exist" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.484236 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-combined-ca-bundle\") pod \"7f526de6-6318-47b6-842b-761a6161f704\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.484314 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-scripts\") pod \"7f526de6-6318-47b6-842b-761a6161f704\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.484340 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f526de6-6318-47b6-842b-761a6161f704-run-httpd\") pod \"7f526de6-6318-47b6-842b-761a6161f704\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.484376 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-config-data\") pod \"7f526de6-6318-47b6-842b-761a6161f704\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " 
Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.484435 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gczht\" (UniqueName: \"kubernetes.io/projected/7f526de6-6318-47b6-842b-761a6161f704-kube-api-access-gczht\") pod \"7f526de6-6318-47b6-842b-761a6161f704\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.485040 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f526de6-6318-47b6-842b-761a6161f704-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7f526de6-6318-47b6-842b-761a6161f704" (UID: "7f526de6-6318-47b6-842b-761a6161f704"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.484545 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-sg-core-conf-yaml\") pod \"7f526de6-6318-47b6-842b-761a6161f704\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.485197 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f526de6-6318-47b6-842b-761a6161f704-log-httpd\") pod \"7f526de6-6318-47b6-842b-761a6161f704\" (UID: \"7f526de6-6318-47b6-842b-761a6161f704\") " Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.485503 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f526de6-6318-47b6-842b-761a6161f704-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7f526de6-6318-47b6-842b-761a6161f704" (UID: "7f526de6-6318-47b6-842b-761a6161f704"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.485625 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f526de6-6318-47b6-842b-761a6161f704-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.485638 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f526de6-6318-47b6-842b-761a6161f704-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.489881 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f526de6-6318-47b6-842b-761a6161f704-kube-api-access-gczht" (OuterVolumeSpecName: "kube-api-access-gczht") pod "7f526de6-6318-47b6-842b-761a6161f704" (UID: "7f526de6-6318-47b6-842b-761a6161f704"). InnerVolumeSpecName "kube-api-access-gczht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.489909 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-scripts" (OuterVolumeSpecName: "scripts") pod "7f526de6-6318-47b6-842b-761a6161f704" (UID: "7f526de6-6318-47b6-842b-761a6161f704"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.510004 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7f526de6-6318-47b6-842b-761a6161f704" (UID: "7f526de6-6318-47b6-842b-761a6161f704"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.566106 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f526de6-6318-47b6-842b-761a6161f704" (UID: "7f526de6-6318-47b6-842b-761a6161f704"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.566537 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-config-data" (OuterVolumeSpecName: "config-data") pod "7f526de6-6318-47b6-842b-761a6161f704" (UID: "7f526de6-6318-47b6-842b-761a6161f704"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.586902 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.586929 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.586942 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.586956 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gczht\" (UniqueName: \"kubernetes.io/projected/7f526de6-6318-47b6-842b-761a6161f704-kube-api-access-gczht\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.586969 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f526de6-6318-47b6-842b-761a6161f704-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.703547 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.729651 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.730041 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.730072 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.734366 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:20:25 crc kubenswrapper[4869]: 
I0106 14:20:25.752379 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:20:25 crc kubenswrapper[4869]: E0106 14:20:25.752949 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f526de6-6318-47b6-842b-761a6161f704" containerName="ceilometer-central-agent" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.752980 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f526de6-6318-47b6-842b-761a6161f704" containerName="ceilometer-central-agent" Jan 06 14:20:25 crc kubenswrapper[4869]: E0106 14:20:25.753012 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f526de6-6318-47b6-842b-761a6161f704" containerName="ceilometer-notification-agent" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.753024 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f526de6-6318-47b6-842b-761a6161f704" containerName="ceilometer-notification-agent" Jan 06 14:20:25 crc kubenswrapper[4869]: E0106 14:20:25.753059 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f526de6-6318-47b6-842b-761a6161f704" containerName="sg-core" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.753069 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f526de6-6318-47b6-842b-761a6161f704" containerName="sg-core" Jan 06 14:20:25 crc kubenswrapper[4869]: E0106 14:20:25.753090 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f526de6-6318-47b6-842b-761a6161f704" containerName="proxy-httpd" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.753098 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f526de6-6318-47b6-842b-761a6161f704" containerName="proxy-httpd" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.753347 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f526de6-6318-47b6-842b-761a6161f704" containerName="sg-core" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.753399 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f526de6-6318-47b6-842b-761a6161f704" containerName="ceilometer-notification-agent" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.753435 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f526de6-6318-47b6-842b-761a6161f704" containerName="ceilometer-central-agent" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.753468 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f526de6-6318-47b6-842b-761a6161f704" containerName="proxy-httpd" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.755779 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.759159 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.759210 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.761159 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.762884 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.771767 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.893142 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-scripts\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.893218 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad2fbe98-b898-4e47-88b0-d2983f9044dc-log-httpd\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.893309 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.893339 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.893481 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-config-data\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.893597 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.893770 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad2fbe98-b898-4e47-88b0-d2983f9044dc-run-httpd\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.893844 4869 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc2gt\" (UniqueName: \"kubernetes.io/projected/ad2fbe98-b898-4e47-88b0-d2983f9044dc-kube-api-access-nc2gt\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.995926 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.996006 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad2fbe98-b898-4e47-88b0-d2983f9044dc-run-httpd\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.996037 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc2gt\" (UniqueName: \"kubernetes.io/projected/ad2fbe98-b898-4e47-88b0-d2983f9044dc-kube-api-access-nc2gt\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.996085 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-scripts\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.996126 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad2fbe98-b898-4e47-88b0-d2983f9044dc-log-httpd\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.996155 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.996177 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.996199 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-config-data\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 14:20:25.997124 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad2fbe98-b898-4e47-88b0-d2983f9044dc-log-httpd\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:25 crc kubenswrapper[4869]: I0106 
14:20:25.997396 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad2fbe98-b898-4e47-88b0-d2983f9044dc-run-httpd\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:26 crc kubenswrapper[4869]: I0106 14:20:26.000969 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:26 crc kubenswrapper[4869]: I0106 14:20:26.001031 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-scripts\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:26 crc kubenswrapper[4869]: I0106 14:20:26.001260 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:26 crc kubenswrapper[4869]: I0106 14:20:26.002026 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-config-data\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:26 crc kubenswrapper[4869]: I0106 14:20:26.004942 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:26 crc kubenswrapper[4869]: I0106 14:20:26.021872 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc2gt\" (UniqueName: \"kubernetes.io/projected/ad2fbe98-b898-4e47-88b0-d2983f9044dc-kube-api-access-nc2gt\") pod \"ceilometer-0\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " pod="openstack/ceilometer-0" Jan 06 14:20:26 crc kubenswrapper[4869]: I0106 14:20:26.097728 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:20:26 crc kubenswrapper[4869]: I0106 14:20:26.407298 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 06 14:20:26 crc kubenswrapper[4869]: W0106 14:20:26.558244 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad2fbe98_b898_4e47_88b0_d2983f9044dc.slice/crio-b347750ed57e160e18a0c91ed638c37e311c7a994532fbb9d4429716814424cc WatchSource:0}: Error finding container b347750ed57e160e18a0c91ed638c37e311c7a994532fbb9d4429716814424cc: Status 404 returned error can't find the container with id b347750ed57e160e18a0c91ed638c37e311c7a994532fbb9d4429716814424cc Jan 06 14:20:26 crc kubenswrapper[4869]: I0106 14:20:26.573708 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:20:26 crc kubenswrapper[4869]: I0106 14:20:26.785806 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4046ec6f-300f-4060-986a-e7fdbb596003" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 06 14:20:26 crc kubenswrapper[4869]: I0106 14:20:26.786063 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4046ec6f-300f-4060-986a-e7fdbb596003" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.179:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 06 14:20:27 crc kubenswrapper[4869]: I0106 14:20:27.410494 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad2fbe98-b898-4e47-88b0-d2983f9044dc","Type":"ContainerStarted","Data":"961dc8089aa9f1223bd9e5293a5ad968bb4f368b22537c89a86aa95296ccef57"} Jan 06 14:20:27 crc kubenswrapper[4869]: I0106 14:20:27.411721 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad2fbe98-b898-4e47-88b0-d2983f9044dc","Type":"ContainerStarted","Data":"b347750ed57e160e18a0c91ed638c37e311c7a994532fbb9d4429716814424cc"} Jan 06 14:20:27 crc kubenswrapper[4869]: I0106 14:20:27.732696 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f526de6-6318-47b6-842b-761a6161f704" path="/var/lib/kubelet/pods/7f526de6-6318-47b6-842b-761a6161f704/volumes" Jan 06 14:20:28 crc kubenswrapper[4869]: I0106 14:20:28.425540 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad2fbe98-b898-4e47-88b0-d2983f9044dc","Type":"ContainerStarted","Data":"ff7c7aa85949853ad3cfdfeb849219b070dcf8a229c09c46c2ae5f523c4dffbc"} Jan 06 14:20:29 crc kubenswrapper[4869]: I0106 14:20:29.437985 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad2fbe98-b898-4e47-88b0-d2983f9044dc","Type":"ContainerStarted","Data":"1f834717eb16413e6ee386baa79e020265ab721229e2751bcaf6f836978de476"} Jan 06 14:20:30 crc kubenswrapper[4869]: I0106 14:20:30.449129 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad2fbe98-b898-4e47-88b0-d2983f9044dc","Type":"ContainerStarted","Data":"e7121d96cf8fe67d8ff2221d6e124551bbc81c0b1f9d024388749abd294cb2ee"} Jan 06 14:20:30 crc kubenswrapper[4869]: I0106 14:20:30.451783 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 06 
14:20:30 crc kubenswrapper[4869]: I0106 14:20:30.474313 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.027083894 podStartE2EDuration="5.474278829s" podCreationTimestamp="2026-01-06 14:20:25 +0000 UTC" firstStartedPulling="2026-01-06 14:20:26.561448851 +0000 UTC m=+1245.101136515" lastFinishedPulling="2026-01-06 14:20:30.008643786 +0000 UTC m=+1248.548331450" observedRunningTime="2026-01-06 14:20:30.469965713 +0000 UTC m=+1249.009653367" watchObservedRunningTime="2026-01-06 14:20:30.474278829 +0000 UTC m=+1249.013966493" Jan 06 14:20:30 crc kubenswrapper[4869]: I0106 14:20:30.592827 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 06 14:20:30 crc kubenswrapper[4869]: I0106 14:20:30.596801 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 06 14:20:30 crc kubenswrapper[4869]: I0106 14:20:30.605219 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 06 14:20:30 crc kubenswrapper[4869]: I0106 14:20:30.701685 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 06 14:20:31 crc kubenswrapper[4869]: I0106 14:20:31.468028 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 06 14:20:33 crc kubenswrapper[4869]: I0106 14:20:33.493190 4869 generic.go:334] "Generic (PLEG): container finished" podID="45dbc745-6a90-4773-8dcf-31d57de4f384" containerID="6d75bd8622f014916acb8f34fa72bba5a875363533ba5f5e4882350b4ea586c2" exitCode=137 Jan 06 14:20:33 crc kubenswrapper[4869]: I0106 14:20:33.495062 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"45dbc745-6a90-4773-8dcf-31d57de4f384","Type":"ContainerDied","Data":"6d75bd8622f014916acb8f34fa72bba5a875363533ba5f5e4882350b4ea586c2"} Jan 06 14:20:33 crc kubenswrapper[4869]: I0106 14:20:33.495198 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"45dbc745-6a90-4773-8dcf-31d57de4f384","Type":"ContainerDied","Data":"bb17f43203c3760b64fe04a2b7049d9dd276b16659e413f9335563ea2a2d808a"} Jan 06 14:20:33 crc kubenswrapper[4869]: I0106 14:20:33.495225 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb17f43203c3760b64fe04a2b7049d9dd276b16659e413f9335563ea2a2d808a" Jan 06 14:20:33 crc kubenswrapper[4869]: I0106 14:20:33.528266 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:33 crc kubenswrapper[4869]: I0106 14:20:33.648975 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45dbc745-6a90-4773-8dcf-31d57de4f384-config-data\") pod \"45dbc745-6a90-4773-8dcf-31d57de4f384\" (UID: \"45dbc745-6a90-4773-8dcf-31d57de4f384\") " Jan 06 14:20:33 crc kubenswrapper[4869]: I0106 14:20:33.649073 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45dbc745-6a90-4773-8dcf-31d57de4f384-combined-ca-bundle\") pod \"45dbc745-6a90-4773-8dcf-31d57de4f384\" (UID: \"45dbc745-6a90-4773-8dcf-31d57de4f384\") " Jan 06 14:20:33 crc kubenswrapper[4869]: I0106 14:20:33.649126 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lz8g\" (UniqueName: \"kubernetes.io/projected/45dbc745-6a90-4773-8dcf-31d57de4f384-kube-api-access-9lz8g\") pod \"45dbc745-6a90-4773-8dcf-31d57de4f384\" (UID: \"45dbc745-6a90-4773-8dcf-31d57de4f384\") " Jan 06 14:20:33 crc kubenswrapper[4869]: I0106 14:20:33.658738 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45dbc745-6a90-4773-8dcf-31d57de4f384-kube-api-access-9lz8g" (OuterVolumeSpecName: "kube-api-access-9lz8g") pod "45dbc745-6a90-4773-8dcf-31d57de4f384" (UID: "45dbc745-6a90-4773-8dcf-31d57de4f384"). InnerVolumeSpecName "kube-api-access-9lz8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:20:33 crc kubenswrapper[4869]: I0106 14:20:33.706852 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45dbc745-6a90-4773-8dcf-31d57de4f384-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45dbc745-6a90-4773-8dcf-31d57de4f384" (UID: "45dbc745-6a90-4773-8dcf-31d57de4f384"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:33 crc kubenswrapper[4869]: I0106 14:20:33.707057 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45dbc745-6a90-4773-8dcf-31d57de4f384-config-data" (OuterVolumeSpecName: "config-data") pod "45dbc745-6a90-4773-8dcf-31d57de4f384" (UID: "45dbc745-6a90-4773-8dcf-31d57de4f384"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:33 crc kubenswrapper[4869]: I0106 14:20:33.751061 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lz8g\" (UniqueName: \"kubernetes.io/projected/45dbc745-6a90-4773-8dcf-31d57de4f384-kube-api-access-9lz8g\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:33 crc kubenswrapper[4869]: I0106 14:20:33.751324 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45dbc745-6a90-4773-8dcf-31d57de4f384-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:33 crc kubenswrapper[4869]: I0106 14:20:33.751334 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45dbc745-6a90-4773-8dcf-31d57de4f384-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.504254 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.533341 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.555068 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.565513 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 06 14:20:34 crc kubenswrapper[4869]: E0106 14:20:34.565983 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45dbc745-6a90-4773-8dcf-31d57de4f384" containerName="nova-cell1-novncproxy-novncproxy" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.566008 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="45dbc745-6a90-4773-8dcf-31d57de4f384" containerName="nova-cell1-novncproxy-novncproxy" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.566215 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="45dbc745-6a90-4773-8dcf-31d57de4f384" containerName="nova-cell1-novncproxy-novncproxy" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.567007 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.573401 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.574278 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.574490 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.574741 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.666584 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76afcc40-ce0e-43d9-8166-a5c8070f8245-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"76afcc40-ce0e-43d9-8166-a5c8070f8245\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.666841 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76afcc40-ce0e-43d9-8166-a5c8070f8245-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"76afcc40-ce0e-43d9-8166-a5c8070f8245\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.667293 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hklx\" (UniqueName: \"kubernetes.io/projected/76afcc40-ce0e-43d9-8166-a5c8070f8245-kube-api-access-2hklx\") pod \"nova-cell1-novncproxy-0\" (UID: \"76afcc40-ce0e-43d9-8166-a5c8070f8245\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.667366 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/76afcc40-ce0e-43d9-8166-a5c8070f8245-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"76afcc40-ce0e-43d9-8166-a5c8070f8245\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.667437 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/76afcc40-ce0e-43d9-8166-a5c8070f8245-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"76afcc40-ce0e-43d9-8166-a5c8070f8245\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.770186 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hklx\" (UniqueName: \"kubernetes.io/projected/76afcc40-ce0e-43d9-8166-a5c8070f8245-kube-api-access-2hklx\") pod \"nova-cell1-novncproxy-0\" (UID: \"76afcc40-ce0e-43d9-8166-a5c8070f8245\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.770432 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/76afcc40-ce0e-43d9-8166-a5c8070f8245-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"76afcc40-ce0e-43d9-8166-a5c8070f8245\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.770496 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/76afcc40-ce0e-43d9-8166-a5c8070f8245-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"76afcc40-ce0e-43d9-8166-a5c8070f8245\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.771489 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76afcc40-ce0e-43d9-8166-a5c8070f8245-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"76afcc40-ce0e-43d9-8166-a5c8070f8245\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.771557 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76afcc40-ce0e-43d9-8166-a5c8070f8245-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"76afcc40-ce0e-43d9-8166-a5c8070f8245\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.775867 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/76afcc40-ce0e-43d9-8166-a5c8070f8245-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"76afcc40-ce0e-43d9-8166-a5c8070f8245\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.776543 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/76afcc40-ce0e-43d9-8166-a5c8070f8245-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"76afcc40-ce0e-43d9-8166-a5c8070f8245\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.777742 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/76afcc40-ce0e-43d9-8166-a5c8070f8245-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"76afcc40-ce0e-43d9-8166-a5c8070f8245\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.787077 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76afcc40-ce0e-43d9-8166-a5c8070f8245-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"76afcc40-ce0e-43d9-8166-a5c8070f8245\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.805022 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hklx\" (UniqueName: \"kubernetes.io/projected/76afcc40-ce0e-43d9-8166-a5c8070f8245-kube-api-access-2hklx\") pod \"nova-cell1-novncproxy-0\" (UID: \"76afcc40-ce0e-43d9-8166-a5c8070f8245\") " pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:34 crc kubenswrapper[4869]: I0106 14:20:34.917248 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:35 crc kubenswrapper[4869]: I0106 14:20:35.355809 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 06 14:20:35 crc kubenswrapper[4869]: I0106 14:20:35.512568 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"76afcc40-ce0e-43d9-8166-a5c8070f8245","Type":"ContainerStarted","Data":"b088e195eed758b7cafe30113d2f8163483c85a4d6e687d86e6a7d7d886f0124"} Jan 06 14:20:35 crc kubenswrapper[4869]: I0106 14:20:35.702979 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 06 14:20:35 crc kubenswrapper[4869]: I0106 14:20:35.704487 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 06 14:20:35 crc kubenswrapper[4869]: I0106 14:20:35.717579 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45dbc745-6a90-4773-8dcf-31d57de4f384" path="/var/lib/kubelet/pods/45dbc745-6a90-4773-8dcf-31d57de4f384/volumes" Jan 06 14:20:35 crc kubenswrapper[4869]: I0106 14:20:35.718501 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 06 14:20:35 crc kubenswrapper[4869]: I0106 14:20:35.718548 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 06 14:20:36 crc kubenswrapper[4869]: I0106 14:20:36.529626 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"76afcc40-ce0e-43d9-8166-a5c8070f8245","Type":"ContainerStarted","Data":"96a2453b8d69fc1c30a4549baeaa85a2b8425d90ad743af5fbc67b5b387591b0"} Jan 06 14:20:36 crc kubenswrapper[4869]: I0106 14:20:36.532117 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 06 14:20:36 crc kubenswrapper[4869]: I0106 14:20:36.535787 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 06 14:20:36 crc kubenswrapper[4869]: I0106 14:20:36.561480 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.561440707 podStartE2EDuration="2.561440707s" podCreationTimestamp="2026-01-06 14:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-06 14:20:36.549327098 +0000 UTC m=+1255.089014752" watchObservedRunningTime="2026-01-06 14:20:36.561440707 +0000 UTC m=+1255.101128371" Jan 06 14:20:36 crc kubenswrapper[4869]: I0106 14:20:36.716292 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-2gz7t"] Jan 06 14:20:36 crc kubenswrapper[4869]: I0106 14:20:36.717983 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" Jan 06 14:20:36 crc kubenswrapper[4869]: I0106 14:20:36.738003 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-2gz7t"] Jan 06 14:20:36 crc kubenswrapper[4869]: I0106 14:20:36.909001 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd59z\" (UniqueName: \"kubernetes.io/projected/fb1f8717-036d-410e-bd16-8c42c4c9252b-kube-api-access-zd59z\") pod \"dnsmasq-dns-5b856c5697-2gz7t\" (UID: \"fb1f8717-036d-410e-bd16-8c42c4c9252b\") " pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" Jan 06 14:20:36 crc kubenswrapper[4869]: I0106 14:20:36.909054 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-dns-svc\") pod \"dnsmasq-dns-5b856c5697-2gz7t\" (UID: \"fb1f8717-036d-410e-bd16-8c42c4c9252b\") " pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" Jan 06 14:20:36 crc kubenswrapper[4869]: I0106 14:20:36.909136 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-2gz7t\" (UID: \"fb1f8717-036d-410e-bd16-8c42c4c9252b\") " pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" Jan 06 14:20:36 crc kubenswrapper[4869]: I0106 14:20:36.909204 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-config\") pod \"dnsmasq-dns-5b856c5697-2gz7t\" (UID: \"fb1f8717-036d-410e-bd16-8c42c4c9252b\") " pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" Jan 06 14:20:36 crc kubenswrapper[4869]: I0106 14:20:36.909269 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-2gz7t\" (UID: \"fb1f8717-036d-410e-bd16-8c42c4c9252b\") " pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" Jan 06 14:20:37 crc kubenswrapper[4869]: I0106 14:20:37.010818 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-2gz7t\" (UID: \"fb1f8717-036d-410e-bd16-8c42c4c9252b\") " pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" Jan 06 14:20:37 crc kubenswrapper[4869]: I0106 14:20:37.010888 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd59z\" (UniqueName: \"kubernetes.io/projected/fb1f8717-036d-410e-bd16-8c42c4c9252b-kube-api-access-zd59z\") pod \"dnsmasq-dns-5b856c5697-2gz7t\" (UID: \"fb1f8717-036d-410e-bd16-8c42c4c9252b\") " pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" Jan 06 14:20:37 crc kubenswrapper[4869]: I0106 14:20:37.010912 
4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-dns-svc\") pod \"dnsmasq-dns-5b856c5697-2gz7t\" (UID: \"fb1f8717-036d-410e-bd16-8c42c4c9252b\") " pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" Jan 06 14:20:37 crc kubenswrapper[4869]: I0106 14:20:37.010966 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-2gz7t\" (UID: \"fb1f8717-036d-410e-bd16-8c42c4c9252b\") " pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" Jan 06 14:20:37 crc kubenswrapper[4869]: I0106 14:20:37.011024 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-config\") pod \"dnsmasq-dns-5b856c5697-2gz7t\" (UID: \"fb1f8717-036d-410e-bd16-8c42c4c9252b\") " pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" Jan 06 14:20:37 crc kubenswrapper[4869]: I0106 14:20:37.011843 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-config\") pod \"dnsmasq-dns-5b856c5697-2gz7t\" (UID: \"fb1f8717-036d-410e-bd16-8c42c4c9252b\") " pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" Jan 06 14:20:37 crc kubenswrapper[4869]: I0106 14:20:37.012387 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-2gz7t\" (UID: \"fb1f8717-036d-410e-bd16-8c42c4c9252b\") " pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" Jan 06 14:20:37 crc kubenswrapper[4869]: I0106 14:20:37.013149 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-dns-svc\") pod \"dnsmasq-dns-5b856c5697-2gz7t\" (UID: \"fb1f8717-036d-410e-bd16-8c42c4c9252b\") " pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" Jan 06 14:20:37 crc kubenswrapper[4869]: I0106 14:20:37.013643 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-2gz7t\" (UID: \"fb1f8717-036d-410e-bd16-8c42c4c9252b\") " pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" Jan 06 14:20:37 crc kubenswrapper[4869]: I0106 14:20:37.034898 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd59z\" (UniqueName: \"kubernetes.io/projected/fb1f8717-036d-410e-bd16-8c42c4c9252b-kube-api-access-zd59z\") pod \"dnsmasq-dns-5b856c5697-2gz7t\" (UID: \"fb1f8717-036d-410e-bd16-8c42c4c9252b\") " pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" Jan 06 14:20:37 crc kubenswrapper[4869]: I0106 14:20:37.081240 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" Jan 06 14:20:37 crc kubenswrapper[4869]: I0106 14:20:37.549427 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-2gz7t"] Jan 06 14:20:37 crc kubenswrapper[4869]: W0106 14:20:37.554799 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb1f8717_036d_410e_bd16_8c42c4c9252b.slice/crio-1b6950ff568a571da629e66c3e48702d916c44a11a6af9b3a420eb1989648437 WatchSource:0}: Error finding container 1b6950ff568a571da629e66c3e48702d916c44a11a6af9b3a420eb1989648437: Status 404 returned error can't find the container with id 1b6950ff568a571da629e66c3e48702d916c44a11a6af9b3a420eb1989648437 Jan 06 14:20:38 crc kubenswrapper[4869]: I0106 14:20:38.561308 4869 generic.go:334] "Generic (PLEG): container finished" podID="fb1f8717-036d-410e-bd16-8c42c4c9252b" containerID="f9d58cfe19c7627fc63b0f6b13123144de272bc6dd8a4c8493d0fac5479f2e67" exitCode=0 Jan 06 14:20:38 crc kubenswrapper[4869]: I0106 14:20:38.561447 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" event={"ID":"fb1f8717-036d-410e-bd16-8c42c4c9252b","Type":"ContainerDied","Data":"f9d58cfe19c7627fc63b0f6b13123144de272bc6dd8a4c8493d0fac5479f2e67"} Jan 06 14:20:38 crc kubenswrapper[4869]: I0106 14:20:38.561673 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" event={"ID":"fb1f8717-036d-410e-bd16-8c42c4c9252b","Type":"ContainerStarted","Data":"1b6950ff568a571da629e66c3e48702d916c44a11a6af9b3a420eb1989648437"} Jan 06 14:20:38 crc kubenswrapper[4869]: I0106 14:20:38.782256 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 06 14:20:38 crc kubenswrapper[4869]: I0106 14:20:38.858492 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:20:38 crc kubenswrapper[4869]: I0106 14:20:38.858945 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad2fbe98-b898-4e47-88b0-d2983f9044dc" containerName="proxy-httpd" containerID="cri-o://e7121d96cf8fe67d8ff2221d6e124551bbc81c0b1f9d024388749abd294cb2ee" gracePeriod=30 Jan 06 14:20:38 crc kubenswrapper[4869]: I0106 14:20:38.858964 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad2fbe98-b898-4e47-88b0-d2983f9044dc" containerName="ceilometer-notification-agent" containerID="cri-o://ff7c7aa85949853ad3cfdfeb849219b070dcf8a229c09c46c2ae5f523c4dffbc" gracePeriod=30 Jan 06 14:20:38 crc kubenswrapper[4869]: I0106 14:20:38.858985 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad2fbe98-b898-4e47-88b0-d2983f9044dc" containerName="ceilometer-central-agent" containerID="cri-o://961dc8089aa9f1223bd9e5293a5ad968bb4f368b22537c89a86aa95296ccef57" gracePeriod=30 Jan 06 14:20:38 crc kubenswrapper[4869]: I0106 14:20:38.858957 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad2fbe98-b898-4e47-88b0-d2983f9044dc" containerName="sg-core" containerID="cri-o://1f834717eb16413e6ee386baa79e020265ab721229e2751bcaf6f836978de476" gracePeriod=30 Jan 06 14:20:39 crc kubenswrapper[4869]: I0106 14:20:39.577039 4869 generic.go:334] "Generic (PLEG): container finished" podID="ad2fbe98-b898-4e47-88b0-d2983f9044dc" 
containerID="e7121d96cf8fe67d8ff2221d6e124551bbc81c0b1f9d024388749abd294cb2ee" exitCode=0 Jan 06 14:20:39 crc kubenswrapper[4869]: I0106 14:20:39.577345 4869 generic.go:334] "Generic (PLEG): container finished" podID="ad2fbe98-b898-4e47-88b0-d2983f9044dc" containerID="1f834717eb16413e6ee386baa79e020265ab721229e2751bcaf6f836978de476" exitCode=2 Jan 06 14:20:39 crc kubenswrapper[4869]: I0106 14:20:39.577355 4869 generic.go:334] "Generic (PLEG): container finished" podID="ad2fbe98-b898-4e47-88b0-d2983f9044dc" containerID="961dc8089aa9f1223bd9e5293a5ad968bb4f368b22537c89a86aa95296ccef57" exitCode=0 Jan 06 14:20:39 crc kubenswrapper[4869]: I0106 14:20:39.577198 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad2fbe98-b898-4e47-88b0-d2983f9044dc","Type":"ContainerDied","Data":"e7121d96cf8fe67d8ff2221d6e124551bbc81c0b1f9d024388749abd294cb2ee"} Jan 06 14:20:39 crc kubenswrapper[4869]: I0106 14:20:39.577436 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad2fbe98-b898-4e47-88b0-d2983f9044dc","Type":"ContainerDied","Data":"1f834717eb16413e6ee386baa79e020265ab721229e2751bcaf6f836978de476"} Jan 06 14:20:39 crc kubenswrapper[4869]: I0106 14:20:39.577455 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad2fbe98-b898-4e47-88b0-d2983f9044dc","Type":"ContainerDied","Data":"961dc8089aa9f1223bd9e5293a5ad968bb4f368b22537c89a86aa95296ccef57"} Jan 06 14:20:39 crc kubenswrapper[4869]: I0106 14:20:39.584332 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" event={"ID":"fb1f8717-036d-410e-bd16-8c42c4c9252b","Type":"ContainerStarted","Data":"ceb8996443b815bcd8ee5132f6397a5614ea85ca5e3350adebfc9c05987772f3"} Jan 06 14:20:39 crc kubenswrapper[4869]: I0106 14:20:39.584482 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4046ec6f-300f-4060-986a-e7fdbb596003" containerName="nova-api-log" containerID="cri-o://f655d6ba82425ddaed19e7f15cc94559263278a8e7648d2b9e7560edc8310ea0" gracePeriod=30 Jan 06 14:20:39 crc kubenswrapper[4869]: I0106 14:20:39.584531 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4046ec6f-300f-4060-986a-e7fdbb596003" containerName="nova-api-api" containerID="cri-o://7ecbd90d07f4d73084110339311141a5bf91630ea19b6e4c0f8b474c9047b0f1" gracePeriod=30 Jan 06 14:20:39 crc kubenswrapper[4869]: I0106 14:20:39.918122 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.357469 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.389196 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" podStartSLOduration=4.3891758339999996 podStartE2EDuration="4.389175834s" podCreationTimestamp="2026-01-06 14:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:20:39.616212298 +0000 UTC m=+1258.155899972" watchObservedRunningTime="2026-01-06 14:20:40.389175834 +0000 UTC m=+1258.928863498" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.477873 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-sg-core-conf-yaml\") pod \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.478398 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-combined-ca-bundle\") pod \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.479144 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad2fbe98-b898-4e47-88b0-d2983f9044dc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ad2fbe98-b898-4e47-88b0-d2983f9044dc" (UID: "ad2fbe98-b898-4e47-88b0-d2983f9044dc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.478641 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad2fbe98-b898-4e47-88b0-d2983f9044dc-run-httpd\") pod \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.479389 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-ceilometer-tls-certs\") pod \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.479911 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc2gt\" (UniqueName: \"kubernetes.io/projected/ad2fbe98-b898-4e47-88b0-d2983f9044dc-kube-api-access-nc2gt\") pod \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.480087 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad2fbe98-b898-4e47-88b0-d2983f9044dc-log-httpd\") pod \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.480193 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-scripts\") pod \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " Jan 06 14:20:40 crc 
kubenswrapper[4869]: I0106 14:20:40.480360 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-config-data\") pod \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\" (UID: \"ad2fbe98-b898-4e47-88b0-d2983f9044dc\") " Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.480772 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad2fbe98-b898-4e47-88b0-d2983f9044dc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ad2fbe98-b898-4e47-88b0-d2983f9044dc" (UID: "ad2fbe98-b898-4e47-88b0-d2983f9044dc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.481763 4869 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad2fbe98-b898-4e47-88b0-d2983f9044dc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.481879 4869 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad2fbe98-b898-4e47-88b0-d2983f9044dc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.485010 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-scripts" (OuterVolumeSpecName: "scripts") pod "ad2fbe98-b898-4e47-88b0-d2983f9044dc" (UID: "ad2fbe98-b898-4e47-88b0-d2983f9044dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.485405 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad2fbe98-b898-4e47-88b0-d2983f9044dc-kube-api-access-nc2gt" (OuterVolumeSpecName: "kube-api-access-nc2gt") pod "ad2fbe98-b898-4e47-88b0-d2983f9044dc" (UID: "ad2fbe98-b898-4e47-88b0-d2983f9044dc"). InnerVolumeSpecName "kube-api-access-nc2gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.516218 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ad2fbe98-b898-4e47-88b0-d2983f9044dc" (UID: "ad2fbe98-b898-4e47-88b0-d2983f9044dc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.542520 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ad2fbe98-b898-4e47-88b0-d2983f9044dc" (UID: "ad2fbe98-b898-4e47-88b0-d2983f9044dc"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.583590 4869 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.584582 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc2gt\" (UniqueName: \"kubernetes.io/projected/ad2fbe98-b898-4e47-88b0-d2983f9044dc-kube-api-access-nc2gt\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.584688 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.584773 4869 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.597885 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad2fbe98-b898-4e47-88b0-d2983f9044dc" (UID: "ad2fbe98-b898-4e47-88b0-d2983f9044dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.601426 4869 generic.go:334] "Generic (PLEG): container finished" podID="4046ec6f-300f-4060-986a-e7fdbb596003" containerID="f655d6ba82425ddaed19e7f15cc94559263278a8e7648d2b9e7560edc8310ea0" exitCode=143 Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.601472 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4046ec6f-300f-4060-986a-e7fdbb596003","Type":"ContainerDied","Data":"f655d6ba82425ddaed19e7f15cc94559263278a8e7648d2b9e7560edc8310ea0"} Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.605409 4869 generic.go:334] "Generic (PLEG): container finished" podID="ad2fbe98-b898-4e47-88b0-d2983f9044dc" containerID="ff7c7aa85949853ad3cfdfeb849219b070dcf8a229c09c46c2ae5f523c4dffbc" exitCode=0 Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.606397 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-config-data" (OuterVolumeSpecName: "config-data") pod "ad2fbe98-b898-4e47-88b0-d2983f9044dc" (UID: "ad2fbe98-b898-4e47-88b0-d2983f9044dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.606934 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.606935 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad2fbe98-b898-4e47-88b0-d2983f9044dc","Type":"ContainerDied","Data":"ff7c7aa85949853ad3cfdfeb849219b070dcf8a229c09c46c2ae5f523c4dffbc"} Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.607306 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad2fbe98-b898-4e47-88b0-d2983f9044dc","Type":"ContainerDied","Data":"b347750ed57e160e18a0c91ed638c37e311c7a994532fbb9d4429716814424cc"} Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.607385 4869 scope.go:117] "RemoveContainer" containerID="e7121d96cf8fe67d8ff2221d6e124551bbc81c0b1f9d024388749abd294cb2ee" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.608047 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.652192 4869 scope.go:117] "RemoveContainer" containerID="1f834717eb16413e6ee386baa79e020265ab721229e2751bcaf6f836978de476" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.667090 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.684538 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.687460 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.687499 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad2fbe98-b898-4e47-88b0-d2983f9044dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.697464 4869 scope.go:117] "RemoveContainer" containerID="ff7c7aa85949853ad3cfdfeb849219b070dcf8a229c09c46c2ae5f523c4dffbc" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.701307 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:20:40 crc kubenswrapper[4869]: E0106 14:20:40.701769 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2fbe98-b898-4e47-88b0-d2983f9044dc" containerName="ceilometer-central-agent" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.701792 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2fbe98-b898-4e47-88b0-d2983f9044dc" containerName="ceilometer-central-agent" Jan 06 14:20:40 crc kubenswrapper[4869]: E0106 14:20:40.701811 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2fbe98-b898-4e47-88b0-d2983f9044dc" containerName="sg-core" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.701819 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2fbe98-b898-4e47-88b0-d2983f9044dc" containerName="sg-core" Jan 06 14:20:40 crc kubenswrapper[4869]: E0106 14:20:40.701833 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2fbe98-b898-4e47-88b0-d2983f9044dc" containerName="ceilometer-notification-agent" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.701839 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2fbe98-b898-4e47-88b0-d2983f9044dc" containerName="ceilometer-notification-agent" 
Jan 06 14:20:40 crc kubenswrapper[4869]: E0106 14:20:40.701857 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad2fbe98-b898-4e47-88b0-d2983f9044dc" containerName="proxy-httpd" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.701863 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad2fbe98-b898-4e47-88b0-d2983f9044dc" containerName="proxy-httpd" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.702027 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad2fbe98-b898-4e47-88b0-d2983f9044dc" containerName="sg-core" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.702043 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad2fbe98-b898-4e47-88b0-d2983f9044dc" containerName="ceilometer-central-agent" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.702062 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad2fbe98-b898-4e47-88b0-d2983f9044dc" containerName="proxy-httpd" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.702070 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad2fbe98-b898-4e47-88b0-d2983f9044dc" containerName="ceilometer-notification-agent" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.704001 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.709060 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.709127 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.709264 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.710080 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.735725 4869 scope.go:117] "RemoveContainer" containerID="961dc8089aa9f1223bd9e5293a5ad968bb4f368b22537c89a86aa95296ccef57" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.753051 4869 scope.go:117] "RemoveContainer" containerID="e7121d96cf8fe67d8ff2221d6e124551bbc81c0b1f9d024388749abd294cb2ee" Jan 06 14:20:40 crc kubenswrapper[4869]: E0106 14:20:40.753480 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7121d96cf8fe67d8ff2221d6e124551bbc81c0b1f9d024388749abd294cb2ee\": container with ID starting with e7121d96cf8fe67d8ff2221d6e124551bbc81c0b1f9d024388749abd294cb2ee not found: ID does not exist" containerID="e7121d96cf8fe67d8ff2221d6e124551bbc81c0b1f9d024388749abd294cb2ee" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.753509 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7121d96cf8fe67d8ff2221d6e124551bbc81c0b1f9d024388749abd294cb2ee"} err="failed to get container status \"e7121d96cf8fe67d8ff2221d6e124551bbc81c0b1f9d024388749abd294cb2ee\": rpc error: code = NotFound desc = could not find container \"e7121d96cf8fe67d8ff2221d6e124551bbc81c0b1f9d024388749abd294cb2ee\": container with ID starting with e7121d96cf8fe67d8ff2221d6e124551bbc81c0b1f9d024388749abd294cb2ee not found: ID does not exist" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.753531 4869 scope.go:117] "RemoveContainer" 
containerID="1f834717eb16413e6ee386baa79e020265ab721229e2751bcaf6f836978de476" Jan 06 14:20:40 crc kubenswrapper[4869]: E0106 14:20:40.754134 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f834717eb16413e6ee386baa79e020265ab721229e2751bcaf6f836978de476\": container with ID starting with 1f834717eb16413e6ee386baa79e020265ab721229e2751bcaf6f836978de476 not found: ID does not exist" containerID="1f834717eb16413e6ee386baa79e020265ab721229e2751bcaf6f836978de476" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.754168 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f834717eb16413e6ee386baa79e020265ab721229e2751bcaf6f836978de476"} err="failed to get container status \"1f834717eb16413e6ee386baa79e020265ab721229e2751bcaf6f836978de476\": rpc error: code = NotFound desc = could not find container \"1f834717eb16413e6ee386baa79e020265ab721229e2751bcaf6f836978de476\": container with ID starting with 1f834717eb16413e6ee386baa79e020265ab721229e2751bcaf6f836978de476 not found: ID does not exist" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.754187 4869 scope.go:117] "RemoveContainer" containerID="ff7c7aa85949853ad3cfdfeb849219b070dcf8a229c09c46c2ae5f523c4dffbc" Jan 06 14:20:40 crc kubenswrapper[4869]: E0106 14:20:40.754538 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff7c7aa85949853ad3cfdfeb849219b070dcf8a229c09c46c2ae5f523c4dffbc\": container with ID starting with ff7c7aa85949853ad3cfdfeb849219b070dcf8a229c09c46c2ae5f523c4dffbc not found: ID does not exist" containerID="ff7c7aa85949853ad3cfdfeb849219b070dcf8a229c09c46c2ae5f523c4dffbc" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.754559 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff7c7aa85949853ad3cfdfeb849219b070dcf8a229c09c46c2ae5f523c4dffbc"} err="failed to get container status \"ff7c7aa85949853ad3cfdfeb849219b070dcf8a229c09c46c2ae5f523c4dffbc\": rpc error: code = NotFound desc = could not find container \"ff7c7aa85949853ad3cfdfeb849219b070dcf8a229c09c46c2ae5f523c4dffbc\": container with ID starting with ff7c7aa85949853ad3cfdfeb849219b070dcf8a229c09c46c2ae5f523c4dffbc not found: ID does not exist" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.754571 4869 scope.go:117] "RemoveContainer" containerID="961dc8089aa9f1223bd9e5293a5ad968bb4f368b22537c89a86aa95296ccef57" Jan 06 14:20:40 crc kubenswrapper[4869]: E0106 14:20:40.754922 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"961dc8089aa9f1223bd9e5293a5ad968bb4f368b22537c89a86aa95296ccef57\": container with ID starting with 961dc8089aa9f1223bd9e5293a5ad968bb4f368b22537c89a86aa95296ccef57 not found: ID does not exist" containerID="961dc8089aa9f1223bd9e5293a5ad968bb4f368b22537c89a86aa95296ccef57" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.754947 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"961dc8089aa9f1223bd9e5293a5ad968bb4f368b22537c89a86aa95296ccef57"} err="failed to get container status \"961dc8089aa9f1223bd9e5293a5ad968bb4f368b22537c89a86aa95296ccef57\": rpc error: code = NotFound desc = could not find container \"961dc8089aa9f1223bd9e5293a5ad968bb4f368b22537c89a86aa95296ccef57\": container with ID starting with 
961dc8089aa9f1223bd9e5293a5ad968bb4f368b22537c89a86aa95296ccef57 not found: ID does not exist" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.789166 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvm7r\" (UniqueName: \"kubernetes.io/projected/cdd7985d-7085-4e06-9be1-e35e94d9c544-kube-api-access-cvm7r\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.789257 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdd7985d-7085-4e06-9be1-e35e94d9c544-config-data\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.789295 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdd7985d-7085-4e06-9be1-e35e94d9c544-scripts\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.789316 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdd7985d-7085-4e06-9be1-e35e94d9c544-log-httpd\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.789362 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdd7985d-7085-4e06-9be1-e35e94d9c544-run-httpd\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.789409 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd7985d-7085-4e06-9be1-e35e94d9c544-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.789436 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdd7985d-7085-4e06-9be1-e35e94d9c544-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.789517 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cdd7985d-7085-4e06-9be1-e35e94d9c544-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.891843 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdd7985d-7085-4e06-9be1-e35e94d9c544-config-data\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.891912 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdd7985d-7085-4e06-9be1-e35e94d9c544-scripts\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.891952 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdd7985d-7085-4e06-9be1-e35e94d9c544-log-httpd\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.891994 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdd7985d-7085-4e06-9be1-e35e94d9c544-run-httpd\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.892047 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd7985d-7085-4e06-9be1-e35e94d9c544-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.892071 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdd7985d-7085-4e06-9be1-e35e94d9c544-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.892117 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cdd7985d-7085-4e06-9be1-e35e94d9c544-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.892167 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvm7r\" (UniqueName: \"kubernetes.io/projected/cdd7985d-7085-4e06-9be1-e35e94d9c544-kube-api-access-cvm7r\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.893082 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdd7985d-7085-4e06-9be1-e35e94d9c544-run-httpd\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.893082 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdd7985d-7085-4e06-9be1-e35e94d9c544-log-httpd\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.897262 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cdd7985d-7085-4e06-9be1-e35e94d9c544-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.897769 4869 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdd7985d-7085-4e06-9be1-e35e94d9c544-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.898821 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdd7985d-7085-4e06-9be1-e35e94d9c544-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.899179 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdd7985d-7085-4e06-9be1-e35e94d9c544-scripts\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.901067 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdd7985d-7085-4e06-9be1-e35e94d9c544-config-data\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:40 crc kubenswrapper[4869]: I0106 14:20:40.912768 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvm7r\" (UniqueName: \"kubernetes.io/projected/cdd7985d-7085-4e06-9be1-e35e94d9c544-kube-api-access-cvm7r\") pod \"ceilometer-0\" (UID: \"cdd7985d-7085-4e06-9be1-e35e94d9c544\") " pod="openstack/ceilometer-0" Jan 06 14:20:41 crc kubenswrapper[4869]: I0106 14:20:41.024816 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 06 14:20:41 crc kubenswrapper[4869]: I0106 14:20:41.473517 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 06 14:20:41 crc kubenswrapper[4869]: I0106 14:20:41.615467 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdd7985d-7085-4e06-9be1-e35e94d9c544","Type":"ContainerStarted","Data":"fdd25caac900b36ec020c6ff2b542a66da77bd2c6e4b216c786ea28679c457a8"} Jan 06 14:20:41 crc kubenswrapper[4869]: I0106 14:20:41.721999 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad2fbe98-b898-4e47-88b0-d2983f9044dc" path="/var/lib/kubelet/pods/ad2fbe98-b898-4e47-88b0-d2983f9044dc/volumes" Jan 06 14:20:42 crc kubenswrapper[4869]: E0106 14:20:42.974902 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4046ec6f_300f_4060_986a_e7fdbb596003.slice/crio-7ecbd90d07f4d73084110339311141a5bf91630ea19b6e4c0f8b474c9047b0f1.scope\": RecentStats: unable to find data in memory cache]" Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.216994 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.360816 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4046ec6f-300f-4060-986a-e7fdbb596003-config-data\") pod \"4046ec6f-300f-4060-986a-e7fdbb596003\" (UID: \"4046ec6f-300f-4060-986a-e7fdbb596003\") " Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.360860 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4046ec6f-300f-4060-986a-e7fdbb596003-logs\") pod \"4046ec6f-300f-4060-986a-e7fdbb596003\" (UID: \"4046ec6f-300f-4060-986a-e7fdbb596003\") " Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.360928 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xklc5\" (UniqueName: \"kubernetes.io/projected/4046ec6f-300f-4060-986a-e7fdbb596003-kube-api-access-xklc5\") pod \"4046ec6f-300f-4060-986a-e7fdbb596003\" (UID: \"4046ec6f-300f-4060-986a-e7fdbb596003\") " Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.360987 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4046ec6f-300f-4060-986a-e7fdbb596003-combined-ca-bundle\") pod \"4046ec6f-300f-4060-986a-e7fdbb596003\" (UID: \"4046ec6f-300f-4060-986a-e7fdbb596003\") " Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.361550 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4046ec6f-300f-4060-986a-e7fdbb596003-logs" (OuterVolumeSpecName: "logs") pod "4046ec6f-300f-4060-986a-e7fdbb596003" (UID: "4046ec6f-300f-4060-986a-e7fdbb596003"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.367598 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4046ec6f-300f-4060-986a-e7fdbb596003-kube-api-access-xklc5" (OuterVolumeSpecName: "kube-api-access-xklc5") pod "4046ec6f-300f-4060-986a-e7fdbb596003" (UID: "4046ec6f-300f-4060-986a-e7fdbb596003"). InnerVolumeSpecName "kube-api-access-xklc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.396845 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4046ec6f-300f-4060-986a-e7fdbb596003-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4046ec6f-300f-4060-986a-e7fdbb596003" (UID: "4046ec6f-300f-4060-986a-e7fdbb596003"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.400810 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4046ec6f-300f-4060-986a-e7fdbb596003-config-data" (OuterVolumeSpecName: "config-data") pod "4046ec6f-300f-4060-986a-e7fdbb596003" (UID: "4046ec6f-300f-4060-986a-e7fdbb596003"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.463359 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4046ec6f-300f-4060-986a-e7fdbb596003-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.463397 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4046ec6f-300f-4060-986a-e7fdbb596003-logs\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.463406 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xklc5\" (UniqueName: \"kubernetes.io/projected/4046ec6f-300f-4060-986a-e7fdbb596003-kube-api-access-xklc5\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.464100 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4046ec6f-300f-4060-986a-e7fdbb596003-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.757401 4869 generic.go:334] "Generic (PLEG): container finished" podID="4046ec6f-300f-4060-986a-e7fdbb596003" containerID="7ecbd90d07f4d73084110339311141a5bf91630ea19b6e4c0f8b474c9047b0f1" exitCode=0 Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.757601 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4046ec6f-300f-4060-986a-e7fdbb596003","Type":"ContainerDied","Data":"7ecbd90d07f4d73084110339311141a5bf91630ea19b6e4c0f8b474c9047b0f1"} Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.758287 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4046ec6f-300f-4060-986a-e7fdbb596003","Type":"ContainerDied","Data":"627c6a2485a26ca7634f557f1df213f88ce068cd85236b02add14f119fb69cdc"} Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.758312 4869 scope.go:117] "RemoveContainer" containerID="7ecbd90d07f4d73084110339311141a5bf91630ea19b6e4c0f8b474c9047b0f1" Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.757704 4869 util.go:48] "No ready sandbox for pod can be found. 
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.760523 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdd7985d-7085-4e06-9be1-e35e94d9c544","Type":"ContainerStarted","Data":"ba02e9514dd46eb3eaf5b645a5501dbdde41760ced908ad98f726d647b9072fe"}
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.862169 4869 scope.go:117] "RemoveContainer" containerID="f655d6ba82425ddaed19e7f15cc94559263278a8e7648d2b9e7560edc8310ea0"
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.905265 4869 scope.go:117] "RemoveContainer" containerID="7ecbd90d07f4d73084110339311141a5bf91630ea19b6e4c0f8b474c9047b0f1"
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.905423 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 06 14:20:43 crc kubenswrapper[4869]: E0106 14:20:43.906842 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ecbd90d07f4d73084110339311141a5bf91630ea19b6e4c0f8b474c9047b0f1\": container with ID starting with 7ecbd90d07f4d73084110339311141a5bf91630ea19b6e4c0f8b474c9047b0f1 not found: ID does not exist" containerID="7ecbd90d07f4d73084110339311141a5bf91630ea19b6e4c0f8b474c9047b0f1"
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.906896 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ecbd90d07f4d73084110339311141a5bf91630ea19b6e4c0f8b474c9047b0f1"} err="failed to get container status \"7ecbd90d07f4d73084110339311141a5bf91630ea19b6e4c0f8b474c9047b0f1\": rpc error: code = NotFound desc = could not find container \"7ecbd90d07f4d73084110339311141a5bf91630ea19b6e4c0f8b474c9047b0f1\": container with ID starting with 7ecbd90d07f4d73084110339311141a5bf91630ea19b6e4c0f8b474c9047b0f1 not found: ID does not exist"
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.906923 4869 scope.go:117] "RemoveContainer" containerID="f655d6ba82425ddaed19e7f15cc94559263278a8e7648d2b9e7560edc8310ea0"
Jan 06 14:20:43 crc kubenswrapper[4869]: E0106 14:20:43.907234 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f655d6ba82425ddaed19e7f15cc94559263278a8e7648d2b9e7560edc8310ea0\": container with ID starting with f655d6ba82425ddaed19e7f15cc94559263278a8e7648d2b9e7560edc8310ea0 not found: ID does not exist" containerID="f655d6ba82425ddaed19e7f15cc94559263278a8e7648d2b9e7560edc8310ea0"
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.907267 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f655d6ba82425ddaed19e7f15cc94559263278a8e7648d2b9e7560edc8310ea0"} err="failed to get container status \"f655d6ba82425ddaed19e7f15cc94559263278a8e7648d2b9e7560edc8310ea0\": rpc error: code = NotFound desc = could not find container \"f655d6ba82425ddaed19e7f15cc94559263278a8e7648d2b9e7560edc8310ea0\": container with ID starting with f655d6ba82425ddaed19e7f15cc94559263278a8e7648d2b9e7560edc8310ea0 not found: ID does not exist"
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.914981 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.930350 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 06 14:20:43 crc kubenswrapper[4869]: E0106 14:20:43.930878 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4046ec6f-300f-4060-986a-e7fdbb596003" containerName="nova-api-api"
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.930958 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4046ec6f-300f-4060-986a-e7fdbb596003" containerName="nova-api-api"
Jan 06 14:20:43 crc kubenswrapper[4869]: E0106 14:20:43.931027 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4046ec6f-300f-4060-986a-e7fdbb596003" containerName="nova-api-log"
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.931084 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4046ec6f-300f-4060-986a-e7fdbb596003" containerName="nova-api-log"
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.931303 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4046ec6f-300f-4060-986a-e7fdbb596003" containerName="nova-api-log"
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.931367 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4046ec6f-300f-4060-986a-e7fdbb596003" containerName="nova-api-api"
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.932649 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.936031 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.936903 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.937721 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.940474 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.975537 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0216cbdc-86f2-4588-ac9f-ad9a814a233a-logs\") pod \"nova-api-0\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " pod="openstack/nova-api-0"
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.975610 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " pod="openstack/nova-api-0"
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.975636 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " pod="openstack/nova-api-0"
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.975652 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-config-data\") pod \"nova-api-0\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " pod="openstack/nova-api-0"
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.975740 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-public-tls-certs\") pod \"nova-api-0\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " pod="openstack/nova-api-0"
Jan 06 14:20:43 crc kubenswrapper[4869]: I0106 14:20:43.975768 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxsd7\" (UniqueName: \"kubernetes.io/projected/0216cbdc-86f2-4588-ac9f-ad9a814a233a-kube-api-access-kxsd7\") pod \"nova-api-0\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " pod="openstack/nova-api-0"
Jan 06 14:20:44 crc kubenswrapper[4869]: I0106 14:20:44.076320 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0216cbdc-86f2-4588-ac9f-ad9a814a233a-logs\") pod \"nova-api-0\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " pod="openstack/nova-api-0"
Jan 06 14:20:44 crc kubenswrapper[4869]: I0106 14:20:44.077529 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " pod="openstack/nova-api-0"
Jan 06 14:20:44 crc kubenswrapper[4869]: I0106 14:20:44.077660 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " pod="openstack/nova-api-0"
Jan 06 14:20:44 crc kubenswrapper[4869]: I0106 14:20:44.077769 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-config-data\") pod \"nova-api-0\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " pod="openstack/nova-api-0"
Jan 06 14:20:44 crc kubenswrapper[4869]: I0106 14:20:44.077884 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-public-tls-certs\") pod \"nova-api-0\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " pod="openstack/nova-api-0"
Jan 06 14:20:44 crc kubenswrapper[4869]: I0106 14:20:44.077980 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxsd7\" (UniqueName: \"kubernetes.io/projected/0216cbdc-86f2-4588-ac9f-ad9a814a233a-kube-api-access-kxsd7\") pod \"nova-api-0\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " pod="openstack/nova-api-0"
Jan 06 14:20:44 crc kubenswrapper[4869]: I0106 14:20:44.077403 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0216cbdc-86f2-4588-ac9f-ad9a814a233a-logs\") pod \"nova-api-0\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " pod="openstack/nova-api-0"
Jan 06 14:20:44 crc kubenswrapper[4869]: I0106 14:20:44.082557 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " pod="openstack/nova-api-0"
Jan 06 14:20:44 crc kubenswrapper[4869]: I0106 14:20:44.083053 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-config-data\") pod \"nova-api-0\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " pod="openstack/nova-api-0"
Jan 06 14:20:44 crc kubenswrapper[4869]: I0106 14:20:44.083996 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-public-tls-certs\") pod \"nova-api-0\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " pod="openstack/nova-api-0"
Jan 06 14:20:44 crc kubenswrapper[4869]: I0106 14:20:44.096172 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " pod="openstack/nova-api-0"
Jan 06 14:20:44 crc kubenswrapper[4869]: I0106 14:20:44.099322 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxsd7\" (UniqueName: \"kubernetes.io/projected/0216cbdc-86f2-4588-ac9f-ad9a814a233a-kube-api-access-kxsd7\") pod \"nova-api-0\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " pod="openstack/nova-api-0"
Jan 06 14:20:44 crc kubenswrapper[4869]: I0106 14:20:44.251183 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 06 14:20:44 crc kubenswrapper[4869]: I0106 14:20:44.758483 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 06 14:20:44 crc kubenswrapper[4869]: W0106 14:20:44.760451 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0216cbdc_86f2_4588_ac9f_ad9a814a233a.slice/crio-1ff88052e210f6859d5ef5c2953493582dbc90b78f887d6317970cf483cb017d WatchSource:0}: Error finding container 1ff88052e210f6859d5ef5c2953493582dbc90b78f887d6317970cf483cb017d: Status 404 returned error can't find the container with id 1ff88052e210f6859d5ef5c2953493582dbc90b78f887d6317970cf483cb017d
Jan 06 14:20:44 crc kubenswrapper[4869]: I0106 14:20:44.780588 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdd7985d-7085-4e06-9be1-e35e94d9c544","Type":"ContainerStarted","Data":"60e4a14d85b0924b8a1489568f909c8249d7bc8d624969226ccabd333a1fdf7b"}
Jan 06 14:20:44 crc kubenswrapper[4869]: I0106 14:20:44.782433 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0216cbdc-86f2-4588-ac9f-ad9a814a233a","Type":"ContainerStarted","Data":"1ff88052e210f6859d5ef5c2953493582dbc90b78f887d6317970cf483cb017d"}
Jan 06 14:20:44 crc kubenswrapper[4869]: I0106 14:20:44.918265 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Jan 06 14:20:44 crc kubenswrapper[4869]: I0106 14:20:44.954198 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 06 14:20:45 crc kubenswrapper[4869]: I0106 14:20:45.721933 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4046ec6f-300f-4060-986a-e7fdbb596003" path="/var/lib/kubelet/pods/4046ec6f-300f-4060-986a-e7fdbb596003/volumes"
Jan 06 14:20:45 crc kubenswrapper[4869]: I0106 14:20:45.791773 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdd7985d-7085-4e06-9be1-e35e94d9c544","Type":"ContainerStarted","Data":"9d088d19888710f89d64b1ed942eac6280b9b59a7ee54f1a0ffb3bfef90e8bfd"}
event={"ID":"cdd7985d-7085-4e06-9be1-e35e94d9c544","Type":"ContainerStarted","Data":"9d088d19888710f89d64b1ed942eac6280b9b59a7ee54f1a0ffb3bfef90e8bfd"} Jan 06 14:20:45 crc kubenswrapper[4869]: I0106 14:20:45.794352 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0216cbdc-86f2-4588-ac9f-ad9a814a233a","Type":"ContainerStarted","Data":"643e5947c5439992c0ed260662eeb535c3d17507372c86a6793ed04d2334754a"} Jan 06 14:20:45 crc kubenswrapper[4869]: I0106 14:20:45.794387 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0216cbdc-86f2-4588-ac9f-ad9a814a233a","Type":"ContainerStarted","Data":"039d33771985a60f67b8ba62dbed8f7a23a3a33341f3ed700241c32f7d4954b4"} Jan 06 14:20:45 crc kubenswrapper[4869]: I0106 14:20:45.819327 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.8192939790000002 podStartE2EDuration="2.819293979s" podCreationTimestamp="2026-01-06 14:20:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:20:45.814591783 +0000 UTC m=+1264.354279467" watchObservedRunningTime="2026-01-06 14:20:45.819293979 +0000 UTC m=+1264.358981643" Jan 06 14:20:45 crc kubenswrapper[4869]: I0106 14:20:45.821209 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 06 14:20:45 crc kubenswrapper[4869]: I0106 14:20:45.959325 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-l9qkx"] Jan 06 14:20:45 crc kubenswrapper[4869]: I0106 14:20:45.961208 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l9qkx" Jan 06 14:20:45 crc kubenswrapper[4869]: I0106 14:20:45.964945 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 06 14:20:45 crc kubenswrapper[4869]: I0106 14:20:45.966127 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 06 14:20:45 crc kubenswrapper[4869]: I0106 14:20:45.991246 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-l9qkx"] Jan 06 14:20:46 crc kubenswrapper[4869]: I0106 14:20:46.118488 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-config-data\") pod \"nova-cell1-cell-mapping-l9qkx\" (UID: \"0c6f6bc0-798b-494a-96d0-a27db4a8acf6\") " pod="openstack/nova-cell1-cell-mapping-l9qkx" Jan 06 14:20:46 crc kubenswrapper[4869]: I0106 14:20:46.118778 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l9qkx\" (UID: \"0c6f6bc0-798b-494a-96d0-a27db4a8acf6\") " pod="openstack/nova-cell1-cell-mapping-l9qkx" Jan 06 14:20:46 crc kubenswrapper[4869]: I0106 14:20:46.118872 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-scripts\") pod \"nova-cell1-cell-mapping-l9qkx\" (UID: \"0c6f6bc0-798b-494a-96d0-a27db4a8acf6\") " pod="openstack/nova-cell1-cell-mapping-l9qkx" Jan 06 
14:20:46 crc kubenswrapper[4869]: I0106 14:20:46.119085 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krw87\" (UniqueName: \"kubernetes.io/projected/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-kube-api-access-krw87\") pod \"nova-cell1-cell-mapping-l9qkx\" (UID: \"0c6f6bc0-798b-494a-96d0-a27db4a8acf6\") " pod="openstack/nova-cell1-cell-mapping-l9qkx" Jan 06 14:20:46 crc kubenswrapper[4869]: I0106 14:20:46.220741 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krw87\" (UniqueName: \"kubernetes.io/projected/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-kube-api-access-krw87\") pod \"nova-cell1-cell-mapping-l9qkx\" (UID: \"0c6f6bc0-798b-494a-96d0-a27db4a8acf6\") " pod="openstack/nova-cell1-cell-mapping-l9qkx" Jan 06 14:20:46 crc kubenswrapper[4869]: I0106 14:20:46.221078 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-config-data\") pod \"nova-cell1-cell-mapping-l9qkx\" (UID: \"0c6f6bc0-798b-494a-96d0-a27db4a8acf6\") " pod="openstack/nova-cell1-cell-mapping-l9qkx" Jan 06 14:20:46 crc kubenswrapper[4869]: I0106 14:20:46.221125 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l9qkx\" (UID: \"0c6f6bc0-798b-494a-96d0-a27db4a8acf6\") " pod="openstack/nova-cell1-cell-mapping-l9qkx" Jan 06 14:20:46 crc kubenswrapper[4869]: I0106 14:20:46.221158 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-scripts\") pod \"nova-cell1-cell-mapping-l9qkx\" (UID: \"0c6f6bc0-798b-494a-96d0-a27db4a8acf6\") " pod="openstack/nova-cell1-cell-mapping-l9qkx" Jan 06 14:20:46 crc kubenswrapper[4869]: I0106 14:20:46.233692 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-scripts\") pod \"nova-cell1-cell-mapping-l9qkx\" (UID: \"0c6f6bc0-798b-494a-96d0-a27db4a8acf6\") " pod="openstack/nova-cell1-cell-mapping-l9qkx" Jan 06 14:20:46 crc kubenswrapper[4869]: I0106 14:20:46.234244 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l9qkx\" (UID: \"0c6f6bc0-798b-494a-96d0-a27db4a8acf6\") " pod="openstack/nova-cell1-cell-mapping-l9qkx" Jan 06 14:20:46 crc kubenswrapper[4869]: I0106 14:20:46.235426 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-config-data\") pod \"nova-cell1-cell-mapping-l9qkx\" (UID: \"0c6f6bc0-798b-494a-96d0-a27db4a8acf6\") " pod="openstack/nova-cell1-cell-mapping-l9qkx" Jan 06 14:20:46 crc kubenswrapper[4869]: I0106 14:20:46.239129 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krw87\" (UniqueName: \"kubernetes.io/projected/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-kube-api-access-krw87\") pod \"nova-cell1-cell-mapping-l9qkx\" (UID: \"0c6f6bc0-798b-494a-96d0-a27db4a8acf6\") " pod="openstack/nova-cell1-cell-mapping-l9qkx" Jan 06 14:20:46 crc kubenswrapper[4869]: I0106 
14:20:46.288086 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l9qkx" Jan 06 14:20:46 crc kubenswrapper[4869]: I0106 14:20:46.768649 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-l9qkx"] Jan 06 14:20:46 crc kubenswrapper[4869]: I0106 14:20:46.805576 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdd7985d-7085-4e06-9be1-e35e94d9c544","Type":"ContainerStarted","Data":"a67d53a3807e81490173f325bbf15c2d13c54e6dcaab13f4bf4aa88d742e385f"} Jan 06 14:20:46 crc kubenswrapper[4869]: I0106 14:20:46.805757 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 06 14:20:46 crc kubenswrapper[4869]: I0106 14:20:46.806959 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l9qkx" event={"ID":"0c6f6bc0-798b-494a-96d0-a27db4a8acf6","Type":"ContainerStarted","Data":"0458db8e925e1203aa914c0ddb484af94537874c279b2b4ce16e79e89ac9954d"} Jan 06 14:20:46 crc kubenswrapper[4869]: I0106 14:20:46.832871 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.083317049 podStartE2EDuration="6.83284914s" podCreationTimestamp="2026-01-06 14:20:40 +0000 UTC" firstStartedPulling="2026-01-06 14:20:41.473280327 +0000 UTC m=+1260.012968021" lastFinishedPulling="2026-01-06 14:20:46.222812448 +0000 UTC m=+1264.762500112" observedRunningTime="2026-01-06 14:20:46.828572274 +0000 UTC m=+1265.368259938" watchObservedRunningTime="2026-01-06 14:20:46.83284914 +0000 UTC m=+1265.372536804" Jan 06 14:20:47 crc kubenswrapper[4869]: I0106 14:20:47.084084 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" Jan 06 14:20:47 crc kubenswrapper[4869]: I0106 14:20:47.144722 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-2p6mk"] Jan 06 14:20:47 crc kubenswrapper[4869]: I0106 14:20:47.145306 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" podUID="fbd4a1a6-68dc-473b-875f-f55c1fbac887" containerName="dnsmasq-dns" containerID="cri-o://afff593e83ab978b1523ae56b8844ade234551daa06e1595246b702ad7da5b83" gracePeriod=10 Jan 06 14:20:47 crc kubenswrapper[4869]: I0106 14:20:47.630995 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" podUID="fbd4a1a6-68dc-473b-875f-f55c1fbac887" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.174:5353: connect: connection refused" Jan 06 14:20:48 crc kubenswrapper[4869]: I0106 14:20:48.823432 4869 generic.go:334] "Generic (PLEG): container finished" podID="fbd4a1a6-68dc-473b-875f-f55c1fbac887" containerID="afff593e83ab978b1523ae56b8844ade234551daa06e1595246b702ad7da5b83" exitCode=0 Jan 06 14:20:48 crc kubenswrapper[4869]: I0106 14:20:48.823512 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" event={"ID":"fbd4a1a6-68dc-473b-875f-f55c1fbac887","Type":"ContainerDied","Data":"afff593e83ab978b1523ae56b8844ade234551daa06e1595246b702ad7da5b83"} Jan 06 14:20:48 crc kubenswrapper[4869]: I0106 14:20:48.825179 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l9qkx" 
event={"ID":"0c6f6bc0-798b-494a-96d0-a27db4a8acf6","Type":"ContainerStarted","Data":"21075043b994dc8e25b5abed974d30d7b0637788f4ab412e99ecb4001081a32b"} Jan 06 14:20:48 crc kubenswrapper[4869]: I0106 14:20:48.840287 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-l9qkx" podStartSLOduration=3.840264015 podStartE2EDuration="3.840264015s" podCreationTimestamp="2026-01-06 14:20:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:20:48.838932102 +0000 UTC m=+1267.378619796" watchObservedRunningTime="2026-01-06 14:20:48.840264015 +0000 UTC m=+1267.379951689" Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.494006 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.696875 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-ovsdbserver-nb\") pod \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\" (UID: \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\") " Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.696970 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-ovsdbserver-sb\") pod \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\" (UID: \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\") " Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.697027 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dh5v\" (UniqueName: \"kubernetes.io/projected/fbd4a1a6-68dc-473b-875f-f55c1fbac887-kube-api-access-6dh5v\") pod \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\" (UID: \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\") " Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.697276 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-dns-svc\") pod \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\" (UID: \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\") " Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.697322 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-config\") pod \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\" (UID: \"fbd4a1a6-68dc-473b-875f-f55c1fbac887\") " Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.711055 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbd4a1a6-68dc-473b-875f-f55c1fbac887-kube-api-access-6dh5v" (OuterVolumeSpecName: "kube-api-access-6dh5v") pod "fbd4a1a6-68dc-473b-875f-f55c1fbac887" (UID: "fbd4a1a6-68dc-473b-875f-f55c1fbac887"). InnerVolumeSpecName "kube-api-access-6dh5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.750051 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fbd4a1a6-68dc-473b-875f-f55c1fbac887" (UID: "fbd4a1a6-68dc-473b-875f-f55c1fbac887"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.752324 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fbd4a1a6-68dc-473b-875f-f55c1fbac887" (UID: "fbd4a1a6-68dc-473b-875f-f55c1fbac887"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.765452 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fbd4a1a6-68dc-473b-875f-f55c1fbac887" (UID: "fbd4a1a6-68dc-473b-875f-f55c1fbac887"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.773778 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-config" (OuterVolumeSpecName: "config") pod "fbd4a1a6-68dc-473b-875f-f55c1fbac887" (UID: "fbd4a1a6-68dc-473b-875f-f55c1fbac887"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.799134 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.799167 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.799176 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.799189 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fbd4a1a6-68dc-473b-875f-f55c1fbac887-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.799198 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dh5v\" (UniqueName: \"kubernetes.io/projected/fbd4a1a6-68dc-473b-875f-f55c1fbac887-kube-api-access-6dh5v\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.835381 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.841074 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-2p6mk" event={"ID":"fbd4a1a6-68dc-473b-875f-f55c1fbac887","Type":"ContainerDied","Data":"798b433eb105e6a694887258683fec1a16f25d639e0d18e554e532d5752a5189"} Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.841144 4869 scope.go:117] "RemoveContainer" containerID="afff593e83ab978b1523ae56b8844ade234551daa06e1595246b702ad7da5b83" Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.873553 4869 scope.go:117] "RemoveContainer" containerID="59b5f8adf014bbe42fdc56da428808cb285d0b13f36ea41b8f58d6cb7a329766" Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.876606 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-2p6mk"] Jan 06 14:20:49 crc kubenswrapper[4869]: I0106 14:20:49.886232 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-2p6mk"] Jan 06 14:20:51 crc kubenswrapper[4869]: I0106 14:20:51.721095 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbd4a1a6-68dc-473b-875f-f55c1fbac887" path="/var/lib/kubelet/pods/fbd4a1a6-68dc-473b-875f-f55c1fbac887/volumes" Jan 06 14:20:53 crc kubenswrapper[4869]: I0106 14:20:53.885421 4869 generic.go:334] "Generic (PLEG): container finished" podID="0c6f6bc0-798b-494a-96d0-a27db4a8acf6" containerID="21075043b994dc8e25b5abed974d30d7b0637788f4ab412e99ecb4001081a32b" exitCode=0 Jan 06 14:20:53 crc kubenswrapper[4869]: I0106 14:20:53.885500 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l9qkx" event={"ID":"0c6f6bc0-798b-494a-96d0-a27db4a8acf6","Type":"ContainerDied","Data":"21075043b994dc8e25b5abed974d30d7b0637788f4ab412e99ecb4001081a32b"} Jan 06 14:20:54 crc kubenswrapper[4869]: I0106 14:20:54.251720 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 06 14:20:54 crc kubenswrapper[4869]: I0106 14:20:54.251782 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 06 14:20:55 crc kubenswrapper[4869]: I0106 14:20:55.241510 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l9qkx" Jan 06 14:20:55 crc kubenswrapper[4869]: I0106 14:20:55.269376 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0216cbdc-86f2-4588-ac9f-ad9a814a233a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.186:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 06 14:20:55 crc kubenswrapper[4869]: I0106 14:20:55.269728 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0216cbdc-86f2-4588-ac9f-ad9a814a233a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.186:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 06 14:20:55 crc kubenswrapper[4869]: I0106 14:20:55.286220 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-scripts\") pod \"0c6f6bc0-798b-494a-96d0-a27db4a8acf6\" (UID: \"0c6f6bc0-798b-494a-96d0-a27db4a8acf6\") " Jan 06 14:20:55 crc kubenswrapper[4869]: I0106 14:20:55.286395 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-combined-ca-bundle\") pod \"0c6f6bc0-798b-494a-96d0-a27db4a8acf6\" (UID: \"0c6f6bc0-798b-494a-96d0-a27db4a8acf6\") " Jan 06 14:20:55 crc kubenswrapper[4869]: I0106 14:20:55.286424 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-config-data\") pod \"0c6f6bc0-798b-494a-96d0-a27db4a8acf6\" (UID: \"0c6f6bc0-798b-494a-96d0-a27db4a8acf6\") " Jan 06 14:20:55 crc kubenswrapper[4869]: I0106 14:20:55.286474 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krw87\" (UniqueName: \"kubernetes.io/projected/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-kube-api-access-krw87\") pod \"0c6f6bc0-798b-494a-96d0-a27db4a8acf6\" (UID: \"0c6f6bc0-798b-494a-96d0-a27db4a8acf6\") " Jan 06 14:20:55 crc kubenswrapper[4869]: I0106 14:20:55.291936 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-kube-api-access-krw87" (OuterVolumeSpecName: "kube-api-access-krw87") pod "0c6f6bc0-798b-494a-96d0-a27db4a8acf6" (UID: "0c6f6bc0-798b-494a-96d0-a27db4a8acf6"). InnerVolumeSpecName "kube-api-access-krw87". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:20:55 crc kubenswrapper[4869]: I0106 14:20:55.292842 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-scripts" (OuterVolumeSpecName: "scripts") pod "0c6f6bc0-798b-494a-96d0-a27db4a8acf6" (UID: "0c6f6bc0-798b-494a-96d0-a27db4a8acf6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:55 crc kubenswrapper[4869]: I0106 14:20:55.311823 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c6f6bc0-798b-494a-96d0-a27db4a8acf6" (UID: "0c6f6bc0-798b-494a-96d0-a27db4a8acf6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:55 crc kubenswrapper[4869]: I0106 14:20:55.313827 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-config-data" (OuterVolumeSpecName: "config-data") pod "0c6f6bc0-798b-494a-96d0-a27db4a8acf6" (UID: "0c6f6bc0-798b-494a-96d0-a27db4a8acf6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:20:55 crc kubenswrapper[4869]: I0106 14:20:55.388705 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:55 crc kubenswrapper[4869]: I0106 14:20:55.389000 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:55 crc kubenswrapper[4869]: I0106 14:20:55.389013 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krw87\" (UniqueName: \"kubernetes.io/projected/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-kube-api-access-krw87\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:55 crc kubenswrapper[4869]: I0106 14:20:55.389026 4869 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c6f6bc0-798b-494a-96d0-a27db4a8acf6-scripts\") on node \"crc\" DevicePath \"\"" Jan 06 14:20:55 crc kubenswrapper[4869]: I0106 14:20:55.915439 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l9qkx" event={"ID":"0c6f6bc0-798b-494a-96d0-a27db4a8acf6","Type":"ContainerDied","Data":"0458db8e925e1203aa914c0ddb484af94537874c279b2b4ce16e79e89ac9954d"} Jan 06 14:20:55 crc kubenswrapper[4869]: I0106 14:20:55.915785 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0458db8e925e1203aa914c0ddb484af94537874c279b2b4ce16e79e89ac9954d" Jan 06 14:20:55 crc kubenswrapper[4869]: I0106 14:20:55.915920 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l9qkx" Jan 06 14:20:56 crc kubenswrapper[4869]: I0106 14:20:56.114693 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 06 14:20:56 crc kubenswrapper[4869]: I0106 14:20:56.115336 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0216cbdc-86f2-4588-ac9f-ad9a814a233a" containerName="nova-api-log" containerID="cri-o://039d33771985a60f67b8ba62dbed8f7a23a3a33341f3ed700241c32f7d4954b4" gracePeriod=30 Jan 06 14:20:56 crc kubenswrapper[4869]: I0106 14:20:56.115428 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0216cbdc-86f2-4588-ac9f-ad9a814a233a" containerName="nova-api-api" containerID="cri-o://643e5947c5439992c0ed260662eeb535c3d17507372c86a6793ed04d2334754a" gracePeriod=30 Jan 06 14:20:56 crc kubenswrapper[4869]: I0106 14:20:56.131735 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 06 14:20:56 crc kubenswrapper[4869]: I0106 14:20:56.131986 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="5f4c37f2-9138-43a0-af83-89ceee8c250e" containerName="nova-scheduler-scheduler" containerID="cri-o://1019b27a570d545d48e2eed6e24e6198529bbb9623db7569d839a0c50ad751c7" gracePeriod=30 Jan 06 14:20:56 crc kubenswrapper[4869]: I0106 14:20:56.143503 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 06 14:20:56 crc kubenswrapper[4869]: I0106 14:20:56.144746 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4be226ea-ef19-4fbe-8a12-d72cef21b03c" containerName="nova-metadata-log" containerID="cri-o://279505af7c7bb12711e79c1388a42c8ed7a1fa97870fb9f768586825d99d73b7" gracePeriod=30 Jan 06 14:20:56 crc kubenswrapper[4869]: I0106 14:20:56.145244 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4be226ea-ef19-4fbe-8a12-d72cef21b03c" containerName="nova-metadata-metadata" containerID="cri-o://b94e02ab8799ec6b488c82121652307735464c599e4f2cc1c1153f3d4ab09509" gracePeriod=30 Jan 06 14:20:56 crc kubenswrapper[4869]: I0106 14:20:56.923837 4869 generic.go:334] "Generic (PLEG): container finished" podID="4be226ea-ef19-4fbe-8a12-d72cef21b03c" containerID="279505af7c7bb12711e79c1388a42c8ed7a1fa97870fb9f768586825d99d73b7" exitCode=143 Jan 06 14:20:56 crc kubenswrapper[4869]: I0106 14:20:56.924199 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4be226ea-ef19-4fbe-8a12-d72cef21b03c","Type":"ContainerDied","Data":"279505af7c7bb12711e79c1388a42c8ed7a1fa97870fb9f768586825d99d73b7"} Jan 06 14:20:56 crc kubenswrapper[4869]: I0106 14:20:56.927079 4869 generic.go:334] "Generic (PLEG): container finished" podID="0216cbdc-86f2-4588-ac9f-ad9a814a233a" containerID="039d33771985a60f67b8ba62dbed8f7a23a3a33341f3ed700241c32f7d4954b4" exitCode=143 Jan 06 14:20:56 crc kubenswrapper[4869]: I0106 14:20:56.927110 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0216cbdc-86f2-4588-ac9f-ad9a814a233a","Type":"ContainerDied","Data":"039d33771985a60f67b8ba62dbed8f7a23a3a33341f3ed700241c32f7d4954b4"} Jan 06 14:21:00 crc kubenswrapper[4869]: I0106 14:21:00.585923 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" 
podUID="4be226ea-ef19-4fbe-8a12-d72cef21b03c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": dial tcp 10.217.0.178:8775: connect: connection refused" Jan 06 14:21:00 crc kubenswrapper[4869]: I0106 14:21:00.585983 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4be226ea-ef19-4fbe-8a12-d72cef21b03c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.178:8775/\": dial tcp 10.217.0.178:8775: connect: connection refused" Jan 06 14:21:00 crc kubenswrapper[4869]: E0106 14:21:00.726017 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1019b27a570d545d48e2eed6e24e6198529bbb9623db7569d839a0c50ad751c7 is running failed: container process not found" containerID="1019b27a570d545d48e2eed6e24e6198529bbb9623db7569d839a0c50ad751c7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 06 14:21:00 crc kubenswrapper[4869]: E0106 14:21:00.726736 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1019b27a570d545d48e2eed6e24e6198529bbb9623db7569d839a0c50ad751c7 is running failed: container process not found" containerID="1019b27a570d545d48e2eed6e24e6198529bbb9623db7569d839a0c50ad751c7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 06 14:21:00 crc kubenswrapper[4869]: E0106 14:21:00.727630 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1019b27a570d545d48e2eed6e24e6198529bbb9623db7569d839a0c50ad751c7 is running failed: container process not found" containerID="1019b27a570d545d48e2eed6e24e6198529bbb9623db7569d839a0c50ad751c7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 06 14:21:00 crc kubenswrapper[4869]: E0106 14:21:00.727752 4869 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1019b27a570d545d48e2eed6e24e6198529bbb9623db7569d839a0c50ad751c7 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="5f4c37f2-9138-43a0-af83-89ceee8c250e" containerName="nova-scheduler-scheduler" Jan 06 14:21:00 crc kubenswrapper[4869]: I0106 14:21:00.994882 4869 generic.go:334] "Generic (PLEG): container finished" podID="5f4c37f2-9138-43a0-af83-89ceee8c250e" containerID="1019b27a570d545d48e2eed6e24e6198529bbb9623db7569d839a0c50ad751c7" exitCode=0 Jan 06 14:21:00 crc kubenswrapper[4869]: I0106 14:21:00.995014 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5f4c37f2-9138-43a0-af83-89ceee8c250e","Type":"ContainerDied","Data":"1019b27a570d545d48e2eed6e24e6198529bbb9623db7569d839a0c50ad751c7"} Jan 06 14:21:00 crc kubenswrapper[4869]: I0106 14:21:00.999872 4869 generic.go:334] "Generic (PLEG): container finished" podID="4be226ea-ef19-4fbe-8a12-d72cef21b03c" containerID="b94e02ab8799ec6b488c82121652307735464c599e4f2cc1c1153f3d4ab09509" exitCode=0 Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:00.999926 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4be226ea-ef19-4fbe-8a12-d72cef21b03c","Type":"ContainerDied","Data":"b94e02ab8799ec6b488c82121652307735464c599e4f2cc1c1153f3d4ab09509"} Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 
14:21:01.541785 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.567784 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.632577 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f4c37f2-9138-43a0-af83-89ceee8c250e-config-data\") pod \"5f4c37f2-9138-43a0-af83-89ceee8c250e\" (UID: \"5f4c37f2-9138-43a0-af83-89ceee8c250e\") " Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.632704 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f4c37f2-9138-43a0-af83-89ceee8c250e-combined-ca-bundle\") pod \"5f4c37f2-9138-43a0-af83-89ceee8c250e\" (UID: \"5f4c37f2-9138-43a0-af83-89ceee8c250e\") " Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.632806 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p7bf\" (UniqueName: \"kubernetes.io/projected/5f4c37f2-9138-43a0-af83-89ceee8c250e-kube-api-access-5p7bf\") pod \"5f4c37f2-9138-43a0-af83-89ceee8c250e\" (UID: \"5f4c37f2-9138-43a0-af83-89ceee8c250e\") " Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.638917 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f4c37f2-9138-43a0-af83-89ceee8c250e-kube-api-access-5p7bf" (OuterVolumeSpecName: "kube-api-access-5p7bf") pod "5f4c37f2-9138-43a0-af83-89ceee8c250e" (UID: "5f4c37f2-9138-43a0-af83-89ceee8c250e"). InnerVolumeSpecName "kube-api-access-5p7bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.660272 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f4c37f2-9138-43a0-af83-89ceee8c250e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f4c37f2-9138-43a0-af83-89ceee8c250e" (UID: "5f4c37f2-9138-43a0-af83-89ceee8c250e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.663762 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f4c37f2-9138-43a0-af83-89ceee8c250e-config-data" (OuterVolumeSpecName: "config-data") pod "5f4c37f2-9138-43a0-af83-89ceee8c250e" (UID: "5f4c37f2-9138-43a0-af83-89ceee8c250e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.734884 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-public-tls-certs\") pod \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.734947 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-internal-tls-certs\") pod \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.734999 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-config-data\") pod \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.735054 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0216cbdc-86f2-4588-ac9f-ad9a814a233a-logs\") pod \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.735132 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-combined-ca-bundle\") pod \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.735257 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxsd7\" (UniqueName: \"kubernetes.io/projected/0216cbdc-86f2-4588-ac9f-ad9a814a233a-kube-api-access-kxsd7\") pod \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\" (UID: \"0216cbdc-86f2-4588-ac9f-ad9a814a233a\") " Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.735887 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0216cbdc-86f2-4588-ac9f-ad9a814a233a-logs" (OuterVolumeSpecName: "logs") pod "0216cbdc-86f2-4588-ac9f-ad9a814a233a" (UID: "0216cbdc-86f2-4588-ac9f-ad9a814a233a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.735825 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f4c37f2-9138-43a0-af83-89ceee8c250e-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.736355 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f4c37f2-9138-43a0-af83-89ceee8c250e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.736369 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p7bf\" (UniqueName: \"kubernetes.io/projected/5f4c37f2-9138-43a0-af83-89ceee8c250e-kube-api-access-5p7bf\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.739861 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0216cbdc-86f2-4588-ac9f-ad9a814a233a-kube-api-access-kxsd7" (OuterVolumeSpecName: "kube-api-access-kxsd7") pod "0216cbdc-86f2-4588-ac9f-ad9a814a233a" (UID: "0216cbdc-86f2-4588-ac9f-ad9a814a233a"). InnerVolumeSpecName "kube-api-access-kxsd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.762368 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-config-data" (OuterVolumeSpecName: "config-data") pod "0216cbdc-86f2-4588-ac9f-ad9a814a233a" (UID: "0216cbdc-86f2-4588-ac9f-ad9a814a233a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.772625 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0216cbdc-86f2-4588-ac9f-ad9a814a233a" (UID: "0216cbdc-86f2-4588-ac9f-ad9a814a233a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.799406 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0216cbdc-86f2-4588-ac9f-ad9a814a233a" (UID: "0216cbdc-86f2-4588-ac9f-ad9a814a233a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.815457 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0216cbdc-86f2-4588-ac9f-ad9a814a233a" (UID: "0216cbdc-86f2-4588-ac9f-ad9a814a233a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.838377 4869 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.838427 4869 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.838442 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.838453 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0216cbdc-86f2-4588-ac9f-ad9a814a233a-logs\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.838465 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0216cbdc-86f2-4588-ac9f-ad9a814a233a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.838477 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxsd7\" (UniqueName: \"kubernetes.io/projected/0216cbdc-86f2-4588-ac9f-ad9a814a233a-kube-api-access-kxsd7\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:01 crc kubenswrapper[4869]: I0106 14:21:01.899236 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.011847 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.011833 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5f4c37f2-9138-43a0-af83-89ceee8c250e","Type":"ContainerDied","Data":"67b5c7a4d1b768444664725b4bdb8e5ae313c3db1a1d4a71a149931026fd5931"} Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.011974 4869 scope.go:117] "RemoveContainer" containerID="1019b27a570d545d48e2eed6e24e6198529bbb9623db7569d839a0c50ad751c7" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.014958 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4be226ea-ef19-4fbe-8a12-d72cef21b03c","Type":"ContainerDied","Data":"9f0a0fafb17183044a95080e28ca53e5816d88389f66d87e613bf2af90385d70"} Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.015038 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.020816 4869 generic.go:334] "Generic (PLEG): container finished" podID="0216cbdc-86f2-4588-ac9f-ad9a814a233a" containerID="643e5947c5439992c0ed260662eeb535c3d17507372c86a6793ed04d2334754a" exitCode=0 Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.020876 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0216cbdc-86f2-4588-ac9f-ad9a814a233a","Type":"ContainerDied","Data":"643e5947c5439992c0ed260662eeb535c3d17507372c86a6793ed04d2334754a"} Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.020902 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0216cbdc-86f2-4588-ac9f-ad9a814a233a","Type":"ContainerDied","Data":"1ff88052e210f6859d5ef5c2953493582dbc90b78f887d6317970cf483cb017d"} Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.020935 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.045389 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.054802 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.059310 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be226ea-ef19-4fbe-8a12-d72cef21b03c-config-data\") pod \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\" (UID: \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\") " Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.059618 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpv6d\" (UniqueName: \"kubernetes.io/projected/4be226ea-ef19-4fbe-8a12-d72cef21b03c-kube-api-access-fpv6d\") pod \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\" (UID: \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\") " Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.059817 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4be226ea-ef19-4fbe-8a12-d72cef21b03c-nova-metadata-tls-certs\") pod \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\" (UID: \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\") " Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.060039 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be226ea-ef19-4fbe-8a12-d72cef21b03c-logs\") pod \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\" (UID: \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\") " Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.060165 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be226ea-ef19-4fbe-8a12-d72cef21b03c-combined-ca-bundle\") pod \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\" (UID: \"4be226ea-ef19-4fbe-8a12-d72cef21b03c\") " Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.064807 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4be226ea-ef19-4fbe-8a12-d72cef21b03c-logs" (OuterVolumeSpecName: "logs") pod "4be226ea-ef19-4fbe-8a12-d72cef21b03c" (UID: "4be226ea-ef19-4fbe-8a12-d72cef21b03c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.067862 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 06 14:21:02 crc kubenswrapper[4869]: E0106 14:21:02.068401 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd4a1a6-68dc-473b-875f-f55c1fbac887" containerName="dnsmasq-dns" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.068422 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd4a1a6-68dc-473b-875f-f55c1fbac887" containerName="dnsmasq-dns" Jan 06 14:21:02 crc kubenswrapper[4869]: E0106 14:21:02.068444 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd4a1a6-68dc-473b-875f-f55c1fbac887" containerName="init" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.068452 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd4a1a6-68dc-473b-875f-f55c1fbac887" containerName="init" Jan 06 14:21:02 crc kubenswrapper[4869]: E0106 14:21:02.068467 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4be226ea-ef19-4fbe-8a12-d72cef21b03c" containerName="nova-metadata-log" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.068475 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be226ea-ef19-4fbe-8a12-d72cef21b03c" containerName="nova-metadata-log" Jan 06 14:21:02 crc kubenswrapper[4869]: E0106 14:21:02.068487 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0216cbdc-86f2-4588-ac9f-ad9a814a233a" containerName="nova-api-log" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.068494 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0216cbdc-86f2-4588-ac9f-ad9a814a233a" containerName="nova-api-log" Jan 06 14:21:02 crc kubenswrapper[4869]: E0106 14:21:02.068507 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0216cbdc-86f2-4588-ac9f-ad9a814a233a" containerName="nova-api-api" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.068516 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0216cbdc-86f2-4588-ac9f-ad9a814a233a" containerName="nova-api-api" Jan 06 14:21:02 crc kubenswrapper[4869]: E0106 14:21:02.068531 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6f6bc0-798b-494a-96d0-a27db4a8acf6" containerName="nova-manage" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.068538 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6f6bc0-798b-494a-96d0-a27db4a8acf6" containerName="nova-manage" Jan 06 14:21:02 crc kubenswrapper[4869]: E0106 14:21:02.068562 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4be226ea-ef19-4fbe-8a12-d72cef21b03c" containerName="nova-metadata-metadata" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.068569 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be226ea-ef19-4fbe-8a12-d72cef21b03c" containerName="nova-metadata-metadata" Jan 06 14:21:02 crc kubenswrapper[4869]: E0106 14:21:02.068582 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f4c37f2-9138-43a0-af83-89ceee8c250e" containerName="nova-scheduler-scheduler" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.068589 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f4c37f2-9138-43a0-af83-89ceee8c250e" containerName="nova-scheduler-scheduler" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.068803 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0216cbdc-86f2-4588-ac9f-ad9a814a233a" 
containerName="nova-api-api" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.068818 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6f6bc0-798b-494a-96d0-a27db4a8acf6" containerName="nova-manage" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.068830 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4be226ea-ef19-4fbe-8a12-d72cef21b03c" containerName="nova-metadata-log" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.068848 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4be226ea-ef19-4fbe-8a12-d72cef21b03c" containerName="nova-metadata-metadata" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.068858 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbd4a1a6-68dc-473b-875f-f55c1fbac887" containerName="dnsmasq-dns" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.068869 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f4c37f2-9138-43a0-af83-89ceee8c250e" containerName="nova-scheduler-scheduler" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.068883 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0216cbdc-86f2-4588-ac9f-ad9a814a233a" containerName="nova-api-log" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.069619 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.070924 4869 scope.go:117] "RemoveContainer" containerID="b94e02ab8799ec6b488c82121652307735464c599e4f2cc1c1153f3d4ab09509" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.073755 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.090628 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.095656 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4be226ea-ef19-4fbe-8a12-d72cef21b03c-kube-api-access-fpv6d" (OuterVolumeSpecName: "kube-api-access-fpv6d") pod "4be226ea-ef19-4fbe-8a12-d72cef21b03c" (UID: "4be226ea-ef19-4fbe-8a12-d72cef21b03c"). InnerVolumeSpecName "kube-api-access-fpv6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.105979 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be226ea-ef19-4fbe-8a12-d72cef21b03c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4be226ea-ef19-4fbe-8a12-d72cef21b03c" (UID: "4be226ea-ef19-4fbe-8a12-d72cef21b03c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.110468 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.121069 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.124877 4869 scope.go:117] "RemoveContainer" containerID="279505af7c7bb12711e79c1388a42c8ed7a1fa97870fb9f768586825d99d73b7" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.128282 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.137572 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.143026 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.144263 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.144438 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.150335 4869 scope.go:117] "RemoveContainer" containerID="643e5947c5439992c0ed260662eeb535c3d17507372c86a6793ed04d2334754a" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.153379 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be226ea-ef19-4fbe-8a12-d72cef21b03c-config-data" (OuterVolumeSpecName: "config-data") pod "4be226ea-ef19-4fbe-8a12-d72cef21b03c" (UID: "4be226ea-ef19-4fbe-8a12-d72cef21b03c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.165660 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r5cj\" (UniqueName: \"kubernetes.io/projected/7614e382-a4ea-473a-bb59-4cf065777f95-kube-api-access-8r5cj\") pod \"nova-scheduler-0\" (UID: \"7614e382-a4ea-473a-bb59-4cf065777f95\") " pod="openstack/nova-scheduler-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.166138 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7614e382-a4ea-473a-bb59-4cf065777f95-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7614e382-a4ea-473a-bb59-4cf065777f95\") " pod="openstack/nova-scheduler-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.166330 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7614e382-a4ea-473a-bb59-4cf065777f95-config-data\") pod \"nova-scheduler-0\" (UID: \"7614e382-a4ea-473a-bb59-4cf065777f95\") " pod="openstack/nova-scheduler-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.166934 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be226ea-ef19-4fbe-8a12-d72cef21b03c-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.166982 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpv6d\" (UniqueName: \"kubernetes.io/projected/4be226ea-ef19-4fbe-8a12-d72cef21b03c-kube-api-access-fpv6d\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.166998 4869 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be226ea-ef19-4fbe-8a12-d72cef21b03c-logs\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.167008 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be226ea-ef19-4fbe-8a12-d72cef21b03c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.177431 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.182539 4869 scope.go:117] "RemoveContainer" containerID="039d33771985a60f67b8ba62dbed8f7a23a3a33341f3ed700241c32f7d4954b4" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.189592 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be226ea-ef19-4fbe-8a12-d72cef21b03c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4be226ea-ef19-4fbe-8a12-d72cef21b03c" (UID: "4be226ea-ef19-4fbe-8a12-d72cef21b03c"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.209614 4869 scope.go:117] "RemoveContainer" containerID="643e5947c5439992c0ed260662eeb535c3d17507372c86a6793ed04d2334754a" Jan 06 14:21:02 crc kubenswrapper[4869]: E0106 14:21:02.209996 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"643e5947c5439992c0ed260662eeb535c3d17507372c86a6793ed04d2334754a\": container with ID starting with 643e5947c5439992c0ed260662eeb535c3d17507372c86a6793ed04d2334754a not found: ID does not exist" containerID="643e5947c5439992c0ed260662eeb535c3d17507372c86a6793ed04d2334754a" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.210033 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"643e5947c5439992c0ed260662eeb535c3d17507372c86a6793ed04d2334754a"} err="failed to get container status \"643e5947c5439992c0ed260662eeb535c3d17507372c86a6793ed04d2334754a\": rpc error: code = NotFound desc = could not find container \"643e5947c5439992c0ed260662eeb535c3d17507372c86a6793ed04d2334754a\": container with ID starting with 643e5947c5439992c0ed260662eeb535c3d17507372c86a6793ed04d2334754a not found: ID does not exist" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.210058 4869 scope.go:117] "RemoveContainer" containerID="039d33771985a60f67b8ba62dbed8f7a23a3a33341f3ed700241c32f7d4954b4" Jan 06 14:21:02 crc kubenswrapper[4869]: E0106 14:21:02.210308 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"039d33771985a60f67b8ba62dbed8f7a23a3a33341f3ed700241c32f7d4954b4\": container with ID starting with 039d33771985a60f67b8ba62dbed8f7a23a3a33341f3ed700241c32f7d4954b4 not found: ID does not exist" containerID="039d33771985a60f67b8ba62dbed8f7a23a3a33341f3ed700241c32f7d4954b4" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.210344 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"039d33771985a60f67b8ba62dbed8f7a23a3a33341f3ed700241c32f7d4954b4"} err="failed to get container status \"039d33771985a60f67b8ba62dbed8f7a23a3a33341f3ed700241c32f7d4954b4\": rpc error: code = NotFound desc = could not find container \"039d33771985a60f67b8ba62dbed8f7a23a3a33341f3ed700241c32f7d4954b4\": container with ID starting with 039d33771985a60f67b8ba62dbed8f7a23a3a33341f3ed700241c32f7d4954b4 not found: ID does not exist" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.268283 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc51cbf9-373a-4c1d-9314-d382cab1b09f-config-data\") pod \"nova-api-0\" (UID: \"dc51cbf9-373a-4c1d-9314-d382cab1b09f\") " pod="openstack/nova-api-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.268511 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7614e382-a4ea-473a-bb59-4cf065777f95-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7614e382-a4ea-473a-bb59-4cf065777f95\") " pod="openstack/nova-scheduler-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.268570 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc51cbf9-373a-4c1d-9314-d382cab1b09f-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"dc51cbf9-373a-4c1d-9314-d382cab1b09f\") " pod="openstack/nova-api-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.268615 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7614e382-a4ea-473a-bb59-4cf065777f95-config-data\") pod \"nova-scheduler-0\" (UID: \"7614e382-a4ea-473a-bb59-4cf065777f95\") " pod="openstack/nova-scheduler-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.268728 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc51cbf9-373a-4c1d-9314-d382cab1b09f-public-tls-certs\") pod \"nova-api-0\" (UID: \"dc51cbf9-373a-4c1d-9314-d382cab1b09f\") " pod="openstack/nova-api-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.268845 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpkwm\" (UniqueName: \"kubernetes.io/projected/dc51cbf9-373a-4c1d-9314-d382cab1b09f-kube-api-access-fpkwm\") pod \"nova-api-0\" (UID: \"dc51cbf9-373a-4c1d-9314-d382cab1b09f\") " pod="openstack/nova-api-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.268877 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc51cbf9-373a-4c1d-9314-d382cab1b09f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dc51cbf9-373a-4c1d-9314-d382cab1b09f\") " pod="openstack/nova-api-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.268938 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r5cj\" (UniqueName: \"kubernetes.io/projected/7614e382-a4ea-473a-bb59-4cf065777f95-kube-api-access-8r5cj\") pod \"nova-scheduler-0\" (UID: \"7614e382-a4ea-473a-bb59-4cf065777f95\") " pod="openstack/nova-scheduler-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.268986 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc51cbf9-373a-4c1d-9314-d382cab1b09f-logs\") pod \"nova-api-0\" (UID: \"dc51cbf9-373a-4c1d-9314-d382cab1b09f\") " pod="openstack/nova-api-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.269235 4869 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4be226ea-ef19-4fbe-8a12-d72cef21b03c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.273198 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7614e382-a4ea-473a-bb59-4cf065777f95-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7614e382-a4ea-473a-bb59-4cf065777f95\") " pod="openstack/nova-scheduler-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.274099 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7614e382-a4ea-473a-bb59-4cf065777f95-config-data\") pod \"nova-scheduler-0\" (UID: \"7614e382-a4ea-473a-bb59-4cf065777f95\") " pod="openstack/nova-scheduler-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.286316 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r5cj\" (UniqueName: 
\"kubernetes.io/projected/7614e382-a4ea-473a-bb59-4cf065777f95-kube-api-access-8r5cj\") pod \"nova-scheduler-0\" (UID: \"7614e382-a4ea-473a-bb59-4cf065777f95\") " pod="openstack/nova-scheduler-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.349025 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.366587 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.370300 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc51cbf9-373a-4c1d-9314-d382cab1b09f-logs\") pod \"nova-api-0\" (UID: \"dc51cbf9-373a-4c1d-9314-d382cab1b09f\") " pod="openstack/nova-api-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.370392 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc51cbf9-373a-4c1d-9314-d382cab1b09f-config-data\") pod \"nova-api-0\" (UID: \"dc51cbf9-373a-4c1d-9314-d382cab1b09f\") " pod="openstack/nova-api-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.370462 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc51cbf9-373a-4c1d-9314-d382cab1b09f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dc51cbf9-373a-4c1d-9314-d382cab1b09f\") " pod="openstack/nova-api-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.370519 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc51cbf9-373a-4c1d-9314-d382cab1b09f-public-tls-certs\") pod \"nova-api-0\" (UID: \"dc51cbf9-373a-4c1d-9314-d382cab1b09f\") " pod="openstack/nova-api-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.370591 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpkwm\" (UniqueName: \"kubernetes.io/projected/dc51cbf9-373a-4c1d-9314-d382cab1b09f-kube-api-access-fpkwm\") pod \"nova-api-0\" (UID: \"dc51cbf9-373a-4c1d-9314-d382cab1b09f\") " pod="openstack/nova-api-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.370619 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc51cbf9-373a-4c1d-9314-d382cab1b09f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dc51cbf9-373a-4c1d-9314-d382cab1b09f\") " pod="openstack/nova-api-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.370818 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc51cbf9-373a-4c1d-9314-d382cab1b09f-logs\") pod \"nova-api-0\" (UID: \"dc51cbf9-373a-4c1d-9314-d382cab1b09f\") " pod="openstack/nova-api-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.376838 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc51cbf9-373a-4c1d-9314-d382cab1b09f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dc51cbf9-373a-4c1d-9314-d382cab1b09f\") " pod="openstack/nova-api-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.377188 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc51cbf9-373a-4c1d-9314-d382cab1b09f-config-data\") pod \"nova-api-0\" (UID: 
\"dc51cbf9-373a-4c1d-9314-d382cab1b09f\") " pod="openstack/nova-api-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.382600 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.384480 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.385690 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc51cbf9-373a-4c1d-9314-d382cab1b09f-public-tls-certs\") pod \"nova-api-0\" (UID: \"dc51cbf9-373a-4c1d-9314-d382cab1b09f\") " pod="openstack/nova-api-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.389076 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc51cbf9-373a-4c1d-9314-d382cab1b09f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dc51cbf9-373a-4c1d-9314-d382cab1b09f\") " pod="openstack/nova-api-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.389313 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.389513 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.393531 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpkwm\" (UniqueName: \"kubernetes.io/projected/dc51cbf9-373a-4c1d-9314-d382cab1b09f-kube-api-access-fpkwm\") pod \"nova-api-0\" (UID: \"dc51cbf9-373a-4c1d-9314-d382cab1b09f\") " pod="openstack/nova-api-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.395607 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.431062 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.467890 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.476276 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3bfe55-109b-43b1-9cde-46333d3d826d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ec3bfe55-109b-43b1-9cde-46333d3d826d\") " pod="openstack/nova-metadata-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.476374 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec3bfe55-109b-43b1-9cde-46333d3d826d-logs\") pod \"nova-metadata-0\" (UID: \"ec3bfe55-109b-43b1-9cde-46333d3d826d\") " pod="openstack/nova-metadata-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.476424 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec3bfe55-109b-43b1-9cde-46333d3d826d-config-data\") pod \"nova-metadata-0\" (UID: \"ec3bfe55-109b-43b1-9cde-46333d3d826d\") " pod="openstack/nova-metadata-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.476466 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3bfe55-109b-43b1-9cde-46333d3d826d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec3bfe55-109b-43b1-9cde-46333d3d826d\") " pod="openstack/nova-metadata-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.476510 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6ltl\" (UniqueName: \"kubernetes.io/projected/ec3bfe55-109b-43b1-9cde-46333d3d826d-kube-api-access-h6ltl\") pod \"nova-metadata-0\" (UID: \"ec3bfe55-109b-43b1-9cde-46333d3d826d\") " pod="openstack/nova-metadata-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.577785 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec3bfe55-109b-43b1-9cde-46333d3d826d-config-data\") pod \"nova-metadata-0\" (UID: \"ec3bfe55-109b-43b1-9cde-46333d3d826d\") " pod="openstack/nova-metadata-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.577835 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3bfe55-109b-43b1-9cde-46333d3d826d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec3bfe55-109b-43b1-9cde-46333d3d826d\") " pod="openstack/nova-metadata-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.577881 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6ltl\" (UniqueName: \"kubernetes.io/projected/ec3bfe55-109b-43b1-9cde-46333d3d826d-kube-api-access-h6ltl\") pod \"nova-metadata-0\" (UID: \"ec3bfe55-109b-43b1-9cde-46333d3d826d\") " pod="openstack/nova-metadata-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.577917 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3bfe55-109b-43b1-9cde-46333d3d826d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ec3bfe55-109b-43b1-9cde-46333d3d826d\") " pod="openstack/nova-metadata-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.577973 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec3bfe55-109b-43b1-9cde-46333d3d826d-logs\") pod \"nova-metadata-0\" (UID: \"ec3bfe55-109b-43b1-9cde-46333d3d826d\") " pod="openstack/nova-metadata-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.578299 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec3bfe55-109b-43b1-9cde-46333d3d826d-logs\") pod \"nova-metadata-0\" (UID: \"ec3bfe55-109b-43b1-9cde-46333d3d826d\") " pod="openstack/nova-metadata-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.584653 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec3bfe55-109b-43b1-9cde-46333d3d826d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec3bfe55-109b-43b1-9cde-46333d3d826d\") " pod="openstack/nova-metadata-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.584882 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec3bfe55-109b-43b1-9cde-46333d3d826d-config-data\") pod \"nova-metadata-0\" (UID: \"ec3bfe55-109b-43b1-9cde-46333d3d826d\") " pod="openstack/nova-metadata-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.585949 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec3bfe55-109b-43b1-9cde-46333d3d826d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ec3bfe55-109b-43b1-9cde-46333d3d826d\") " pod="openstack/nova-metadata-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.602327 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6ltl\" (UniqueName: \"kubernetes.io/projected/ec3bfe55-109b-43b1-9cde-46333d3d826d-kube-api-access-h6ltl\") pod \"nova-metadata-0\" (UID: \"ec3bfe55-109b-43b1-9cde-46333d3d826d\") " pod="openstack/nova-metadata-0" Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.701949 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 06 14:21:02 crc kubenswrapper[4869]: W0106 14:21:02.861479 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7614e382_a4ea_473a_bb59_4cf065777f95.slice/crio-be153865f6c0868d8c8cdc569a34d996c65f2bd64239b8c3248c7502631d8fc4 WatchSource:0}: Error finding container be153865f6c0868d8c8cdc569a34d996c65f2bd64239b8c3248c7502631d8fc4: Status 404 returned error can't find the container with id be153865f6c0868d8c8cdc569a34d996c65f2bd64239b8c3248c7502631d8fc4 Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.862119 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 06 14:21:02 crc kubenswrapper[4869]: I0106 14:21:02.991599 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 06 14:21:02 crc kubenswrapper[4869]: W0106 14:21:02.992280 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc51cbf9_373a_4c1d_9314_d382cab1b09f.slice/crio-c8c37051591693188fd81ec81a75d9c6e4bb972469b081a3e2fb70c7fb1e937f WatchSource:0}: Error finding container c8c37051591693188fd81ec81a75d9c6e4bb972469b081a3e2fb70c7fb1e937f: Status 404 returned error can't find the container with id c8c37051591693188fd81ec81a75d9c6e4bb972469b081a3e2fb70c7fb1e937f Jan 06 14:21:03 crc kubenswrapper[4869]: I0106 14:21:03.042773 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc51cbf9-373a-4c1d-9314-d382cab1b09f","Type":"ContainerStarted","Data":"c8c37051591693188fd81ec81a75d9c6e4bb972469b081a3e2fb70c7fb1e937f"} Jan 06 14:21:03 crc kubenswrapper[4869]: I0106 14:21:03.044655 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7614e382-a4ea-473a-bb59-4cf065777f95","Type":"ContainerStarted","Data":"be153865f6c0868d8c8cdc569a34d996c65f2bd64239b8c3248c7502631d8fc4"} Jan 06 14:21:03 crc kubenswrapper[4869]: I0106 14:21:03.194874 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 06 14:21:03 crc kubenswrapper[4869]: I0106 14:21:03.622289 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:21:03 crc kubenswrapper[4869]: I0106 14:21:03.622886 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:21:03 crc kubenswrapper[4869]: I0106 14:21:03.718244 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0216cbdc-86f2-4588-ac9f-ad9a814a233a" path="/var/lib/kubelet/pods/0216cbdc-86f2-4588-ac9f-ad9a814a233a/volumes" Jan 06 14:21:03 crc kubenswrapper[4869]: I0106 14:21:03.719296 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4be226ea-ef19-4fbe-8a12-d72cef21b03c" path="/var/lib/kubelet/pods/4be226ea-ef19-4fbe-8a12-d72cef21b03c/volumes" Jan 06 14:21:03 crc kubenswrapper[4869]: I0106 14:21:03.720966 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="5f4c37f2-9138-43a0-af83-89ceee8c250e" path="/var/lib/kubelet/pods/5f4c37f2-9138-43a0-af83-89ceee8c250e/volumes" Jan 06 14:21:04 crc kubenswrapper[4869]: I0106 14:21:04.079898 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7614e382-a4ea-473a-bb59-4cf065777f95","Type":"ContainerStarted","Data":"7e948a7e128d63da8b35c9258b1264474c79008eada7b84cf9f0c7edf6bd691c"} Jan 06 14:21:04 crc kubenswrapper[4869]: I0106 14:21:04.086173 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc51cbf9-373a-4c1d-9314-d382cab1b09f","Type":"ContainerStarted","Data":"3b10789c83fd16d5670e8bb34505133205de4f8ec05e4c9088a88a67f34bfa21"} Jan 06 14:21:04 crc kubenswrapper[4869]: I0106 14:21:04.086217 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dc51cbf9-373a-4c1d-9314-d382cab1b09f","Type":"ContainerStarted","Data":"813e755703876c3b99cf2a08c18f51b872f35943fd045adce91cbd83cdca2d4f"} Jan 06 14:21:04 crc kubenswrapper[4869]: I0106 14:21:04.089841 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec3bfe55-109b-43b1-9cde-46333d3d826d","Type":"ContainerStarted","Data":"b3aa7c5f01326e262af50e2934dbcd65929ec7520b54c3b0331d5987dcafb510"} Jan 06 14:21:04 crc kubenswrapper[4869]: I0106 14:21:04.089897 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec3bfe55-109b-43b1-9cde-46333d3d826d","Type":"ContainerStarted","Data":"0e183daacc5e04e57c0bbeb5e955b7b636eb4cf77de5fe684389e48f3e96ff32"} Jan 06 14:21:04 crc kubenswrapper[4869]: I0106 14:21:04.089910 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec3bfe55-109b-43b1-9cde-46333d3d826d","Type":"ContainerStarted","Data":"3d62030b47f48287507630bd5ce2abe4f03bcc46480161cbe32a57f693b87614"} Jan 06 14:21:04 crc kubenswrapper[4869]: I0106 14:21:04.116410 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.116383031 podStartE2EDuration="2.116383031s" podCreationTimestamp="2026-01-06 14:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:21:04.103958444 +0000 UTC m=+1282.643646108" watchObservedRunningTime="2026-01-06 14:21:04.116383031 +0000 UTC m=+1282.656070695" Jan 06 14:21:04 crc kubenswrapper[4869]: I0106 14:21:04.152073 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.152047502 podStartE2EDuration="2.152047502s" podCreationTimestamp="2026-01-06 14:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:21:04.147045218 +0000 UTC m=+1282.686732892" watchObservedRunningTime="2026-01-06 14:21:04.152047502 +0000 UTC m=+1282.691735166" Jan 06 14:21:04 crc kubenswrapper[4869]: I0106 14:21:04.153111 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.153102948 podStartE2EDuration="2.153102948s" podCreationTimestamp="2026-01-06 14:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:21:04.127923536 +0000 UTC m=+1282.667611230" watchObservedRunningTime="2026-01-06 
Jan 06 14:21:07 crc kubenswrapper[4869]: I0106 14:21:07.431946 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 06 14:21:07 crc kubenswrapper[4869]: I0106 14:21:07.703037 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 06 14:21:07 crc kubenswrapper[4869]: I0106 14:21:07.703853 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 06 14:21:11 crc kubenswrapper[4869]: I0106 14:21:11.036282 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 06 14:21:12 crc kubenswrapper[4869]: I0106 14:21:12.431768 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 06 14:21:12 crc kubenswrapper[4869]: I0106 14:21:12.460914 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 06 14:21:12 crc kubenswrapper[4869]: I0106 14:21:12.470126 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 06 14:21:12 crc kubenswrapper[4869]: I0106 14:21:12.470170 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 06 14:21:12 crc kubenswrapper[4869]: I0106 14:21:12.702333 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 06 14:21:12 crc kubenswrapper[4869]: I0106 14:21:12.702389 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 06 14:21:13 crc kubenswrapper[4869]: I0106 14:21:13.194915 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 06 14:21:13 crc kubenswrapper[4869]: I0106 14:21:13.498773 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dc51cbf9-373a-4c1d-9314-d382cab1b09f" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.189:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 06 14:21:13 crc kubenswrapper[4869]: I0106 14:21:13.498773 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="dc51cbf9-373a-4c1d-9314-d382cab1b09f" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.189:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 06 14:21:13 crc kubenswrapper[4869]: I0106 14:21:13.722860 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ec3bfe55-109b-43b1-9cde-46333d3d826d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 06 14:21:13 crc kubenswrapper[4869]: I0106 14:21:13.723222 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ec3bfe55-109b-43b1-9cde-46333d3d826d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 06 14:21:22 crc kubenswrapper[4869]: I0106 14:21:22.478530 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 06 14:21:22 crc kubenswrapper[4869]: I0106 14:21:22.479211 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 06 14:21:22 crc kubenswrapper[4869]: I0106 14:21:22.480553 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 06 14:21:22 crc kubenswrapper[4869]: I0106 14:21:22.480588 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 06 14:21:22 crc kubenswrapper[4869]: I0106 14:21:22.485803 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 06 14:21:22 crc kubenswrapper[4869]: I0106 14:21:22.492290 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 06 14:21:22 crc kubenswrapper[4869]: I0106 14:21:22.706794 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 06 14:21:22 crc kubenswrapper[4869]: I0106 14:21:22.707287 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 06 14:21:22 crc kubenswrapper[4869]: I0106 14:21:22.712481 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 06 14:21:23 crc kubenswrapper[4869]: I0106 14:21:23.259577 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 06 14:21:31 crc kubenswrapper[4869]: I0106 14:21:31.387593 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 06 14:21:32 crc kubenswrapper[4869]: I0106 14:21:32.320018 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 06 14:21:33 crc kubenswrapper[4869]: I0106 14:21:33.622489 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 06 14:21:33 crc kubenswrapper[4869]: I0106 14:21:33.622583 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 06 14:21:35 crc kubenswrapper[4869]: I0106 14:21:35.851871 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="a54155a0-94ff-4519-81e3-68a0bb1b62b6" containerName="rabbitmq" containerID="cri-o://fff04b7bdaac883fa42e70a3f4f3479f087735d8328b74bd9aec6635e624421d" gracePeriod=604796
Jan 06 14:21:37 crc kubenswrapper[4869]: I0106 14:21:37.548204 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="ae2b9cdc-8940-4aeb-bea8-fac416d93eed" containerName="rabbitmq" containerID="cri-o://4e32ceea8482595843d531b398fd7bb8b0756b75344ed0c66cd64e8ce0ac81d4" gracePeriod=604795
Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.426067 4869 generic.go:334] "Generic (PLEG): container finished" podID="a54155a0-94ff-4519-81e3-68a0bb1b62b6" containerID="fff04b7bdaac883fa42e70a3f4f3479f087735d8328b74bd9aec6635e624421d" exitCode=0
Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.426131 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a54155a0-94ff-4519-81e3-68a0bb1b62b6","Type":"ContainerDied","Data":"fff04b7bdaac883fa42e70a3f4f3479f087735d8328b74bd9aec6635e624421d"}
Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.504275 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.668910 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-tls\") pod \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") "
Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.668966 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a54155a0-94ff-4519-81e3-68a0bb1b62b6-plugins-conf\") pod \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") "
Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.668984 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-confd\") pod \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") "
Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.669021 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs97c\" (UniqueName: \"kubernetes.io/projected/a54155a0-94ff-4519-81e3-68a0bb1b62b6-kube-api-access-xs97c\") pod \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") "
Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.669090 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a54155a0-94ff-4519-81e3-68a0bb1b62b6-pod-info\") pod \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") "
Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.669111 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a54155a0-94ff-4519-81e3-68a0bb1b62b6-erlang-cookie-secret\") pod \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") "
Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.669174 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a54155a0-94ff-4519-81e3-68a0bb1b62b6-server-conf\") pod \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") "
Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.669229 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a54155a0-94ff-4519-81e3-68a0bb1b62b6-config-data\") pod \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") "
Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.669274 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-plugins\") pod \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") "
\"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.669296 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-erlang-cookie\") pod \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.669325 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\" (UID: \"a54155a0-94ff-4519-81e3-68a0bb1b62b6\") " Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.672185 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a54155a0-94ff-4519-81e3-68a0bb1b62b6-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a54155a0-94ff-4519-81e3-68a0bb1b62b6" (UID: "a54155a0-94ff-4519-81e3-68a0bb1b62b6"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.672618 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a54155a0-94ff-4519-81e3-68a0bb1b62b6" (UID: "a54155a0-94ff-4519-81e3-68a0bb1b62b6"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.673192 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a54155a0-94ff-4519-81e3-68a0bb1b62b6" (UID: "a54155a0-94ff-4519-81e3-68a0bb1b62b6"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.675999 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a54155a0-94ff-4519-81e3-68a0bb1b62b6" (UID: "a54155a0-94ff-4519-81e3-68a0bb1b62b6"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.676277 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a54155a0-94ff-4519-81e3-68a0bb1b62b6-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a54155a0-94ff-4519-81e3-68a0bb1b62b6" (UID: "a54155a0-94ff-4519-81e3-68a0bb1b62b6"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.677481 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a54155a0-94ff-4519-81e3-68a0bb1b62b6-kube-api-access-xs97c" (OuterVolumeSpecName: "kube-api-access-xs97c") pod "a54155a0-94ff-4519-81e3-68a0bb1b62b6" (UID: "a54155a0-94ff-4519-81e3-68a0bb1b62b6"). InnerVolumeSpecName "kube-api-access-xs97c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.678104 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "a54155a0-94ff-4519-81e3-68a0bb1b62b6" (UID: "a54155a0-94ff-4519-81e3-68a0bb1b62b6"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.681125 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a54155a0-94ff-4519-81e3-68a0bb1b62b6-pod-info" (OuterVolumeSpecName: "pod-info") pod "a54155a0-94ff-4519-81e3-68a0bb1b62b6" (UID: "a54155a0-94ff-4519-81e3-68a0bb1b62b6"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.699248 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a54155a0-94ff-4519-81e3-68a0bb1b62b6-config-data" (OuterVolumeSpecName: "config-data") pod "a54155a0-94ff-4519-81e3-68a0bb1b62b6" (UID: "a54155a0-94ff-4519-81e3-68a0bb1b62b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.723071 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a54155a0-94ff-4519-81e3-68a0bb1b62b6-server-conf" (OuterVolumeSpecName: "server-conf") pod "a54155a0-94ff-4519-81e3-68a0bb1b62b6" (UID: "a54155a0-94ff-4519-81e3-68a0bb1b62b6"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.772920 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.773403 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.773440 4869 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.773455 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.773469 4869 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a54155a0-94ff-4519-81e3-68a0bb1b62b6-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.773480 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs97c\" (UniqueName: \"kubernetes.io/projected/a54155a0-94ff-4519-81e3-68a0bb1b62b6-kube-api-access-xs97c\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.773696 4869 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/a54155a0-94ff-4519-81e3-68a0bb1b62b6-pod-info\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.773711 4869 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a54155a0-94ff-4519-81e3-68a0bb1b62b6-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.773722 4869 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a54155a0-94ff-4519-81e3-68a0bb1b62b6-server-conf\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.773734 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a54155a0-94ff-4519-81e3-68a0bb1b62b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.793812 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a54155a0-94ff-4519-81e3-68a0bb1b62b6" (UID: "a54155a0-94ff-4519-81e3-68a0bb1b62b6"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.796057 4869 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.874937 4869 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:42 crc kubenswrapper[4869]: I0106 14:21:42.874971 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a54155a0-94ff-4519-81e3-68a0bb1b62b6-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.438132 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a54155a0-94ff-4519-81e3-68a0bb1b62b6","Type":"ContainerDied","Data":"41971692dd8b3b01daa8fe8f88e7596c1f6cdf0b79ec8c3218e1501587892016"} Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.438196 4869 scope.go:117] "RemoveContainer" containerID="fff04b7bdaac883fa42e70a3f4f3479f087735d8328b74bd9aec6635e624421d" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.438233 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.465015 4869 scope.go:117] "RemoveContainer" containerID="b4f041e0f9531bfc6e45c8260345558c6a4ae855d64eb9a899f864051195a0a2" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.496272 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.516155 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.523178 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 06 14:21:43 crc kubenswrapper[4869]: E0106 14:21:43.523523 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a54155a0-94ff-4519-81e3-68a0bb1b62b6" containerName="setup-container" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.523542 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a54155a0-94ff-4519-81e3-68a0bb1b62b6" containerName="setup-container" Jan 06 14:21:43 crc kubenswrapper[4869]: E0106 14:21:43.523560 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a54155a0-94ff-4519-81e3-68a0bb1b62b6" containerName="rabbitmq" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.523567 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a54155a0-94ff-4519-81e3-68a0bb1b62b6" containerName="rabbitmq" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.523743 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a54155a0-94ff-4519-81e3-68a0bb1b62b6" containerName="rabbitmq" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.524919 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.526972 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.526990 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.527605 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.528028 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.530114 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.531262 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-m222g" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.531624 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.541259 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.698963 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.699025 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-pod-info\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.699055 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.699128 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt7fc\" (UniqueName: \"kubernetes.io/projected/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-kube-api-access-nt7fc\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.699193 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-server-conf\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.699226 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.699250 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.699401 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.699597 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-config-data\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.699794 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.699888 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.718315 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a54155a0-94ff-4519-81e3-68a0bb1b62b6" path="/var/lib/kubelet/pods/a54155a0-94ff-4519-81e3-68a0bb1b62b6/volumes" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.801735 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.802989 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.803055 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-pod-info\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.803092 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.803159 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt7fc\" (UniqueName: \"kubernetes.io/projected/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-kube-api-access-nt7fc\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.803193 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-server-conf\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.803228 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.803269 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.803312 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.803402 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-config-data\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.803475 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.803701 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.804411 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " 
pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.804704 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-server-conf\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.804833 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.805175 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.805326 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-config-data\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.811345 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.813556 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-pod-info\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.814095 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.817895 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.826872 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt7fc\" (UniqueName: \"kubernetes.io/projected/84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46-kube-api-access-nt7fc\") pod \"rabbitmq-server-0\" (UID: \"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:43 crc kubenswrapper[4869]: I0106 14:21:43.872981 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: 
\"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46\") " pod="openstack/rabbitmq-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.071242 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.170031 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.223272 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-pod-info\") pod \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.223964 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.224008 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-plugins-conf\") pod \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.224037 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-erlang-cookie\") pod \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.224972 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ae2b9cdc-8940-4aeb-bea8-fac416d93eed" (UID: "ae2b9cdc-8940-4aeb-bea8-fac416d93eed"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.225100 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf5bb\" (UniqueName: \"kubernetes.io/projected/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-kube-api-access-rf5bb\") pod \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.225242 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ae2b9cdc-8940-4aeb-bea8-fac416d93eed" (UID: "ae2b9cdc-8940-4aeb-bea8-fac416d93eed"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.225267 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-config-data\") pod \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.225597 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-server-conf\") pod \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.225648 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-confd\") pod \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.225897 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-tls\") pod \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.226025 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-erlang-cookie-secret\") pod \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.226090 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-plugins\") pod \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\" (UID: \"ae2b9cdc-8940-4aeb-bea8-fac416d93eed\") " Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.227324 4869 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.227350 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.229626 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ae2b9cdc-8940-4aeb-bea8-fac416d93eed" (UID: "ae2b9cdc-8940-4aeb-bea8-fac416d93eed"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.231679 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "ae2b9cdc-8940-4aeb-bea8-fac416d93eed" (UID: "ae2b9cdc-8940-4aeb-bea8-fac416d93eed"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.232356 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ae2b9cdc-8940-4aeb-bea8-fac416d93eed" (UID: "ae2b9cdc-8940-4aeb-bea8-fac416d93eed"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.234574 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-kube-api-access-rf5bb" (OuterVolumeSpecName: "kube-api-access-rf5bb") pod "ae2b9cdc-8940-4aeb-bea8-fac416d93eed" (UID: "ae2b9cdc-8940-4aeb-bea8-fac416d93eed"). InnerVolumeSpecName "kube-api-access-rf5bb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.234696 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-pod-info" (OuterVolumeSpecName: "pod-info") pod "ae2b9cdc-8940-4aeb-bea8-fac416d93eed" (UID: "ae2b9cdc-8940-4aeb-bea8-fac416d93eed"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.244853 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ae2b9cdc-8940-4aeb-bea8-fac416d93eed" (UID: "ae2b9cdc-8940-4aeb-bea8-fac416d93eed"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.263528 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-config-data" (OuterVolumeSpecName: "config-data") pod "ae2b9cdc-8940-4aeb-bea8-fac416d93eed" (UID: "ae2b9cdc-8940-4aeb-bea8-fac416d93eed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.291970 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-server-conf" (OuterVolumeSpecName: "server-conf") pod "ae2b9cdc-8940-4aeb-bea8-fac416d93eed" (UID: "ae2b9cdc-8940-4aeb-bea8-fac416d93eed"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.328868 4869 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.328899 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.328908 4869 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-pod-info\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.328941 4869 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.328952 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf5bb\" (UniqueName: \"kubernetes.io/projected/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-kube-api-access-rf5bb\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.328962 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.328972 4869 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-server-conf\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.328981 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.349568 4869 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.407168 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ae2b9cdc-8940-4aeb-bea8-fac416d93eed" (UID: "ae2b9cdc-8940-4aeb-bea8-fac416d93eed"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.430547 4869 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.430582 4869 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae2b9cdc-8940-4aeb-bea8-fac416d93eed-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.464449 4869 generic.go:334] "Generic (PLEG): container finished" podID="ae2b9cdc-8940-4aeb-bea8-fac416d93eed" containerID="4e32ceea8482595843d531b398fd7bb8b0756b75344ed0c66cd64e8ce0ac81d4" exitCode=0 Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.464525 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ae2b9cdc-8940-4aeb-bea8-fac416d93eed","Type":"ContainerDied","Data":"4e32ceea8482595843d531b398fd7bb8b0756b75344ed0c66cd64e8ce0ac81d4"} Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.464560 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ae2b9cdc-8940-4aeb-bea8-fac416d93eed","Type":"ContainerDied","Data":"52f1f7c403694c3b1d3b4b841c0dc136c6ed81bcddb5602d091205665ebd6b20"} Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.464594 4869 scope.go:117] "RemoveContainer" containerID="4e32ceea8482595843d531b398fd7bb8b0756b75344ed0c66cd64e8ce0ac81d4" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.465054 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.509072 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.510141 4869 scope.go:117] "RemoveContainer" containerID="426b388f0985e54a80a1a58cff04cdc2d24f72f605fe5010a05fc3ccdc5cb647" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.521744 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.543762 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 06 14:21:44 crc kubenswrapper[4869]: E0106 14:21:44.544196 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae2b9cdc-8940-4aeb-bea8-fac416d93eed" containerName="rabbitmq" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.544209 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae2b9cdc-8940-4aeb-bea8-fac416d93eed" containerName="rabbitmq" Jan 06 14:21:44 crc kubenswrapper[4869]: E0106 14:21:44.544234 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae2b9cdc-8940-4aeb-bea8-fac416d93eed" containerName="setup-container" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.544240 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae2b9cdc-8940-4aeb-bea8-fac416d93eed" containerName="setup-container" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.544414 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae2b9cdc-8940-4aeb-bea8-fac416d93eed" containerName="rabbitmq" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.545437 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.546288 4869 scope.go:117] "RemoveContainer" containerID="4e32ceea8482595843d531b398fd7bb8b0756b75344ed0c66cd64e8ce0ac81d4" Jan 06 14:21:44 crc kubenswrapper[4869]: E0106 14:21:44.548836 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e32ceea8482595843d531b398fd7bb8b0756b75344ed0c66cd64e8ce0ac81d4\": container with ID starting with 4e32ceea8482595843d531b398fd7bb8b0756b75344ed0c66cd64e8ce0ac81d4 not found: ID does not exist" containerID="4e32ceea8482595843d531b398fd7bb8b0756b75344ed0c66cd64e8ce0ac81d4" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.548882 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e32ceea8482595843d531b398fd7bb8b0756b75344ed0c66cd64e8ce0ac81d4"} err="failed to get container status \"4e32ceea8482595843d531b398fd7bb8b0756b75344ed0c66cd64e8ce0ac81d4\": rpc error: code = NotFound desc = could not find container \"4e32ceea8482595843d531b398fd7bb8b0756b75344ed0c66cd64e8ce0ac81d4\": container with ID starting with 4e32ceea8482595843d531b398fd7bb8b0756b75344ed0c66cd64e8ce0ac81d4 not found: ID does not exist" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.548906 4869 scope.go:117] "RemoveContainer" containerID="426b388f0985e54a80a1a58cff04cdc2d24f72f605fe5010a05fc3ccdc5cb647" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.549154 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.549251 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-c7c5n" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.549278 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.549423 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 06 14:21:44 crc kubenswrapper[4869]: E0106 14:21:44.549419 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"426b388f0985e54a80a1a58cff04cdc2d24f72f605fe5010a05fc3ccdc5cb647\": container with ID starting with 426b388f0985e54a80a1a58cff04cdc2d24f72f605fe5010a05fc3ccdc5cb647 not found: ID does not exist" containerID="426b388f0985e54a80a1a58cff04cdc2d24f72f605fe5010a05fc3ccdc5cb647" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.549612 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"426b388f0985e54a80a1a58cff04cdc2d24f72f605fe5010a05fc3ccdc5cb647"} err="failed to get container status \"426b388f0985e54a80a1a58cff04cdc2d24f72f605fe5010a05fc3ccdc5cb647\": rpc error: code = NotFound desc = could not find container \"426b388f0985e54a80a1a58cff04cdc2d24f72f605fe5010a05fc3ccdc5cb647\": container with ID starting with 426b388f0985e54a80a1a58cff04cdc2d24f72f605fe5010a05fc3ccdc5cb647 not found: ID does not exist" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.549552 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.549598 4869 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"rabbitmq-cell1-server-conf" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.549619 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.567695 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.672553 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.735445 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8efa0859-d909-40a3-8868-2cee1b98f0dd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.735499 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8efa0859-d909-40a3-8868-2cee1b98f0dd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.735574 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8efa0859-d909-40a3-8868-2cee1b98f0dd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.735603 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8efa0859-d909-40a3-8868-2cee1b98f0dd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.735645 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh5dl\" (UniqueName: \"kubernetes.io/projected/8efa0859-d909-40a3-8868-2cee1b98f0dd-kube-api-access-hh5dl\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.735712 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8efa0859-d909-40a3-8868-2cee1b98f0dd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.735745 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.735814 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/8efa0859-d909-40a3-8868-2cee1b98f0dd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.735844 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8efa0859-d909-40a3-8868-2cee1b98f0dd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.735892 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8efa0859-d909-40a3-8868-2cee1b98f0dd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.735930 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8efa0859-d909-40a3-8868-2cee1b98f0dd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.838407 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8efa0859-d909-40a3-8868-2cee1b98f0dd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.838485 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8efa0859-d909-40a3-8868-2cee1b98f0dd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.838539 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8efa0859-d909-40a3-8868-2cee1b98f0dd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.838573 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8efa0859-d909-40a3-8868-2cee1b98f0dd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.838642 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8efa0859-d909-40a3-8868-2cee1b98f0dd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.838670 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8efa0859-d909-40a3-8868-2cee1b98f0dd-pod-info\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.838761 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8efa0859-d909-40a3-8868-2cee1b98f0dd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.838791 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8efa0859-d909-40a3-8868-2cee1b98f0dd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.838824 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh5dl\" (UniqueName: \"kubernetes.io/projected/8efa0859-d909-40a3-8868-2cee1b98f0dd-kube-api-access-hh5dl\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.838878 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8efa0859-d909-40a3-8868-2cee1b98f0dd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.838923 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.839181 4869 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.839791 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8efa0859-d909-40a3-8868-2cee1b98f0dd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.840409 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8efa0859-d909-40a3-8868-2cee1b98f0dd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.840829 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8efa0859-d909-40a3-8868-2cee1b98f0dd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: 
I0106 14:21:44.840949 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8efa0859-d909-40a3-8868-2cee1b98f0dd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.841572 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8efa0859-d909-40a3-8868-2cee1b98f0dd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.844709 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8efa0859-d909-40a3-8868-2cee1b98f0dd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.844838 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8efa0859-d909-40a3-8868-2cee1b98f0dd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.845027 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8efa0859-d909-40a3-8868-2cee1b98f0dd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.845719 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8efa0859-d909-40a3-8868-2cee1b98f0dd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.856457 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh5dl\" (UniqueName: \"kubernetes.io/projected/8efa0859-d909-40a3-8868-2cee1b98f0dd-kube-api-access-hh5dl\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:44 crc kubenswrapper[4869]: I0106 14:21:44.885570 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8efa0859-d909-40a3-8868-2cee1b98f0dd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:45 crc kubenswrapper[4869]: I0106 14:21:45.165808 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:21:45 crc kubenswrapper[4869]: I0106 14:21:45.473530 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46","Type":"ContainerStarted","Data":"4b14f4e9a9d36a32636633a39bdb8455f134a1bf75456f361e731b8719a9a8a5"} Jan 06 14:21:45 crc kubenswrapper[4869]: I0106 14:21:45.646900 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 06 14:21:45 crc kubenswrapper[4869]: W0106 14:21:45.661516 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8efa0859_d909_40a3_8868_2cee1b98f0dd.slice/crio-59715e2bd7994b4d689ef0107ad6d48d9f6a9a71f2590da96976792c8d6ddd0b WatchSource:0}: Error finding container 59715e2bd7994b4d689ef0107ad6d48d9f6a9a71f2590da96976792c8d6ddd0b: Status 404 returned error can't find the container with id 59715e2bd7994b4d689ef0107ad6d48d9f6a9a71f2590da96976792c8d6ddd0b Jan 06 14:21:45 crc kubenswrapper[4869]: I0106 14:21:45.719810 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae2b9cdc-8940-4aeb-bea8-fac416d93eed" path="/var/lib/kubelet/pods/ae2b9cdc-8940-4aeb-bea8-fac416d93eed/volumes" Jan 06 14:21:45 crc kubenswrapper[4869]: I0106 14:21:45.797268 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-lrc5j"] Jan 06 14:21:45 crc kubenswrapper[4869]: I0106 14:21:45.799611 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:45 crc kubenswrapper[4869]: I0106 14:21:45.801646 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 06 14:21:45 crc kubenswrapper[4869]: I0106 14:21:45.820270 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-lrc5j"] Jan 06 14:21:45 crc kubenswrapper[4869]: I0106 14:21:45.959450 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-lrc5j\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:45 crc kubenswrapper[4869]: I0106 14:21:45.959518 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-lrc5j\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:45 crc kubenswrapper[4869]: I0106 14:21:45.959591 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxrwv\" (UniqueName: \"kubernetes.io/projected/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-kube-api-access-sxrwv\") pod \"dnsmasq-dns-6447ccbd8f-lrc5j\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:45 crc kubenswrapper[4869]: I0106 14:21:45.959759 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-config\") pod \"dnsmasq-dns-6447ccbd8f-lrc5j\" (UID: 
\"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:45 crc kubenswrapper[4869]: I0106 14:21:45.959821 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-lrc5j\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:45 crc kubenswrapper[4869]: I0106 14:21:45.959865 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-lrc5j\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:46 crc kubenswrapper[4869]: I0106 14:21:46.061905 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-lrc5j\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:46 crc kubenswrapper[4869]: I0106 14:21:46.061978 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-lrc5j\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:46 crc kubenswrapper[4869]: I0106 14:21:46.062036 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-lrc5j\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:46 crc kubenswrapper[4869]: I0106 14:21:46.062058 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-lrc5j\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:46 crc kubenswrapper[4869]: I0106 14:21:46.062094 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxrwv\" (UniqueName: \"kubernetes.io/projected/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-kube-api-access-sxrwv\") pod \"dnsmasq-dns-6447ccbd8f-lrc5j\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:46 crc kubenswrapper[4869]: I0106 14:21:46.062536 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-config\") pod \"dnsmasq-dns-6447ccbd8f-lrc5j\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:46 crc kubenswrapper[4869]: I0106 14:21:46.063270 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-lrc5j\" (UID: 
\"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:46 crc kubenswrapper[4869]: I0106 14:21:46.063324 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-lrc5j\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:46 crc kubenswrapper[4869]: I0106 14:21:46.063343 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-lrc5j\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:46 crc kubenswrapper[4869]: I0106 14:21:46.063400 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-config\") pod \"dnsmasq-dns-6447ccbd8f-lrc5j\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:46 crc kubenswrapper[4869]: I0106 14:21:46.063882 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-lrc5j\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:46 crc kubenswrapper[4869]: I0106 14:21:46.162347 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxrwv\" (UniqueName: \"kubernetes.io/projected/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-kube-api-access-sxrwv\") pod \"dnsmasq-dns-6447ccbd8f-lrc5j\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:46 crc kubenswrapper[4869]: I0106 14:21:46.299932 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:46 crc kubenswrapper[4869]: I0106 14:21:46.496336 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8efa0859-d909-40a3-8868-2cee1b98f0dd","Type":"ContainerStarted","Data":"59715e2bd7994b4d689ef0107ad6d48d9f6a9a71f2590da96976792c8d6ddd0b"} Jan 06 14:21:46 crc kubenswrapper[4869]: I0106 14:21:46.798291 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-lrc5j"] Jan 06 14:21:47 crc kubenswrapper[4869]: I0106 14:21:47.508352 4869 generic.go:334] "Generic (PLEG): container finished" podID="5fffe6a7-cbd9-4f34-a4f4-34648ceca331" containerID="54aa70201496b02a4a7417503ee88dfa657053727801bde234965673b2ac7921" exitCode=0 Jan 06 14:21:47 crc kubenswrapper[4869]: I0106 14:21:47.508412 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" event={"ID":"5fffe6a7-cbd9-4f34-a4f4-34648ceca331","Type":"ContainerDied","Data":"54aa70201496b02a4a7417503ee88dfa657053727801bde234965673b2ac7921"} Jan 06 14:21:47 crc kubenswrapper[4869]: I0106 14:21:47.508956 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" event={"ID":"5fffe6a7-cbd9-4f34-a4f4-34648ceca331","Type":"ContainerStarted","Data":"c8271a04f3c31cf7f4856cbb5294569a4d13bab1e6dfa2e0471edee1802ebf0e"} Jan 06 14:21:47 crc kubenswrapper[4869]: I0106 14:21:47.512341 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46","Type":"ContainerStarted","Data":"5495f04f7af3c1e1adefce1bf69e4edd504518be2abdd0396e7cadb6fd4c6d7d"} Jan 06 14:21:48 crc kubenswrapper[4869]: I0106 14:21:48.524646 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" event={"ID":"5fffe6a7-cbd9-4f34-a4f4-34648ceca331","Type":"ContainerStarted","Data":"b71f3edea315454bc68f61d9742133767a8724d566de8e34f5e1a937b65945c1"} Jan 06 14:21:48 crc kubenswrapper[4869]: I0106 14:21:48.525108 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:48 crc kubenswrapper[4869]: I0106 14:21:48.529999 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8efa0859-d909-40a3-8868-2cee1b98f0dd","Type":"ContainerStarted","Data":"01c8b177e9f6c835a8f988f896f38925b4b49c4ce96de4b7a0f204ce3b1876f2"} Jan 06 14:21:48 crc kubenswrapper[4869]: I0106 14:21:48.556376 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" podStartSLOduration=3.55632889 podStartE2EDuration="3.55632889s" podCreationTimestamp="2026-01-06 14:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:21:48.550869785 +0000 UTC m=+1327.090557479" watchObservedRunningTime="2026-01-06 14:21:48.55632889 +0000 UTC m=+1327.096016604" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.301982 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.377343 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-2gz7t"] Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.378458 4869 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" podUID="fb1f8717-036d-410e-bd16-8c42c4c9252b" containerName="dnsmasq-dns" containerID="cri-o://ceb8996443b815bcd8ee5132f6397a5614ea85ca5e3350adebfc9c05987772f3" gracePeriod=10 Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.562698 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-5vx6z"] Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.564948 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.573809 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-5vx6z"] Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.649729 4869 generic.go:334] "Generic (PLEG): container finished" podID="fb1f8717-036d-410e-bd16-8c42c4c9252b" containerID="ceb8996443b815bcd8ee5132f6397a5614ea85ca5e3350adebfc9c05987772f3" exitCode=0 Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.649774 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" event={"ID":"fb1f8717-036d-410e-bd16-8c42c4c9252b","Type":"ContainerDied","Data":"ceb8996443b815bcd8ee5132f6397a5614ea85ca5e3350adebfc9c05987772f3"} Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.715581 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb9q2\" (UniqueName: \"kubernetes.io/projected/254ec1d5-4669-4f52-b206-ff0c28541337-kube-api-access-sb9q2\") pod \"dnsmasq-dns-864d5fc68c-5vx6z\" (UID: \"254ec1d5-4669-4f52-b206-ff0c28541337\") " pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.715647 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/254ec1d5-4669-4f52-b206-ff0c28541337-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-5vx6z\" (UID: \"254ec1d5-4669-4f52-b206-ff0c28541337\") " pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.715771 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/254ec1d5-4669-4f52-b206-ff0c28541337-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-5vx6z\" (UID: \"254ec1d5-4669-4f52-b206-ff0c28541337\") " pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.716016 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/254ec1d5-4669-4f52-b206-ff0c28541337-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-5vx6z\" (UID: \"254ec1d5-4669-4f52-b206-ff0c28541337\") " pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.716123 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/254ec1d5-4669-4f52-b206-ff0c28541337-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-5vx6z\" (UID: \"254ec1d5-4669-4f52-b206-ff0c28541337\") " pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.716158 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/254ec1d5-4669-4f52-b206-ff0c28541337-config\") pod \"dnsmasq-dns-864d5fc68c-5vx6z\" (UID: \"254ec1d5-4669-4f52-b206-ff0c28541337\") " pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.820503 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/254ec1d5-4669-4f52-b206-ff0c28541337-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-5vx6z\" (UID: \"254ec1d5-4669-4f52-b206-ff0c28541337\") " pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.827884 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/254ec1d5-4669-4f52-b206-ff0c28541337-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-5vx6z\" (UID: \"254ec1d5-4669-4f52-b206-ff0c28541337\") " pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.828954 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/254ec1d5-4669-4f52-b206-ff0c28541337-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-5vx6z\" (UID: \"254ec1d5-4669-4f52-b206-ff0c28541337\") " pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.829017 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/254ec1d5-4669-4f52-b206-ff0c28541337-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-5vx6z\" (UID: \"254ec1d5-4669-4f52-b206-ff0c28541337\") " pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.829323 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/254ec1d5-4669-4f52-b206-ff0c28541337-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-5vx6z\" (UID: \"254ec1d5-4669-4f52-b206-ff0c28541337\") " pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.829964 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/254ec1d5-4669-4f52-b206-ff0c28541337-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-5vx6z\" (UID: \"254ec1d5-4669-4f52-b206-ff0c28541337\") " pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.830018 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/254ec1d5-4669-4f52-b206-ff0c28541337-config\") pod \"dnsmasq-dns-864d5fc68c-5vx6z\" (UID: \"254ec1d5-4669-4f52-b206-ff0c28541337\") " pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.830599 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/254ec1d5-4669-4f52-b206-ff0c28541337-config\") pod \"dnsmasq-dns-864d5fc68c-5vx6z\" (UID: \"254ec1d5-4669-4f52-b206-ff0c28541337\") " pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.830734 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb9q2\" (UniqueName: \"kubernetes.io/projected/254ec1d5-4669-4f52-b206-ff0c28541337-kube-api-access-sb9q2\") pod 
\"dnsmasq-dns-864d5fc68c-5vx6z\" (UID: \"254ec1d5-4669-4f52-b206-ff0c28541337\") " pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.830853 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/254ec1d5-4669-4f52-b206-ff0c28541337-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-5vx6z\" (UID: \"254ec1d5-4669-4f52-b206-ff0c28541337\") " pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.833158 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/254ec1d5-4669-4f52-b206-ff0c28541337-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-5vx6z\" (UID: \"254ec1d5-4669-4f52-b206-ff0c28541337\") " pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.851525 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb9q2\" (UniqueName: \"kubernetes.io/projected/254ec1d5-4669-4f52-b206-ff0c28541337-kube-api-access-sb9q2\") pod \"dnsmasq-dns-864d5fc68c-5vx6z\" (UID: \"254ec1d5-4669-4f52-b206-ff0c28541337\") " pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.943210 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" Jan 06 14:21:56 crc kubenswrapper[4869]: I0106 14:21:56.951412 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.034194 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-dns-svc\") pod \"fb1f8717-036d-410e-bd16-8c42c4c9252b\" (UID: \"fb1f8717-036d-410e-bd16-8c42c4c9252b\") " Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.034550 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-ovsdbserver-nb\") pod \"fb1f8717-036d-410e-bd16-8c42c4c9252b\" (UID: \"fb1f8717-036d-410e-bd16-8c42c4c9252b\") " Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.034620 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd59z\" (UniqueName: \"kubernetes.io/projected/fb1f8717-036d-410e-bd16-8c42c4c9252b-kube-api-access-zd59z\") pod \"fb1f8717-036d-410e-bd16-8c42c4c9252b\" (UID: \"fb1f8717-036d-410e-bd16-8c42c4c9252b\") " Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.034677 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-config\") pod \"fb1f8717-036d-410e-bd16-8c42c4c9252b\" (UID: \"fb1f8717-036d-410e-bd16-8c42c4c9252b\") " Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.034766 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-ovsdbserver-sb\") pod \"fb1f8717-036d-410e-bd16-8c42c4c9252b\" (UID: \"fb1f8717-036d-410e-bd16-8c42c4c9252b\") " Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.039872 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/fb1f8717-036d-410e-bd16-8c42c4c9252b-kube-api-access-zd59z" (OuterVolumeSpecName: "kube-api-access-zd59z") pod "fb1f8717-036d-410e-bd16-8c42c4c9252b" (UID: "fb1f8717-036d-410e-bd16-8c42c4c9252b"). InnerVolumeSpecName "kube-api-access-zd59z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.084471 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fb1f8717-036d-410e-bd16-8c42c4c9252b" (UID: "fb1f8717-036d-410e-bd16-8c42c4c9252b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.095106 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-config" (OuterVolumeSpecName: "config") pod "fb1f8717-036d-410e-bd16-8c42c4c9252b" (UID: "fb1f8717-036d-410e-bd16-8c42c4c9252b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.104654 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fb1f8717-036d-410e-bd16-8c42c4c9252b" (UID: "fb1f8717-036d-410e-bd16-8c42c4c9252b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.137438 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.137475 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd59z\" (UniqueName: \"kubernetes.io/projected/fb1f8717-036d-410e-bd16-8c42c4c9252b-kube-api-access-zd59z\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.137503 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.137513 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.140783 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fb1f8717-036d-410e-bd16-8c42c4c9252b" (UID: "fb1f8717-036d-410e-bd16-8c42c4c9252b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.239429 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb1f8717-036d-410e-bd16-8c42c4c9252b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.407973 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-5vx6z"] Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.661282 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" event={"ID":"fb1f8717-036d-410e-bd16-8c42c4c9252b","Type":"ContainerDied","Data":"1b6950ff568a571da629e66c3e48702d916c44a11a6af9b3a420eb1989648437"} Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.661628 4869 scope.go:117] "RemoveContainer" containerID="ceb8996443b815bcd8ee5132f6397a5614ea85ca5e3350adebfc9c05987772f3" Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.661342 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-2gz7t" Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.664393 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" event={"ID":"254ec1d5-4669-4f52-b206-ff0c28541337","Type":"ContainerStarted","Data":"98109fd5215b7166684936dd33135ecd7e34c7ad4cd4b3283a5d5c0953ea586b"} Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.664630 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" event={"ID":"254ec1d5-4669-4f52-b206-ff0c28541337","Type":"ContainerStarted","Data":"4d2a11fa3fcec867e22bd40199d129607feaa9947531c5d376789da13cb737b2"} Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.723744 4869 scope.go:117] "RemoveContainer" containerID="f9d58cfe19c7627fc63b0f6b13123144de272bc6dd8a4c8493d0fac5479f2e67" Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.735627 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-2gz7t"] Jan 06 14:21:57 crc kubenswrapper[4869]: I0106 14:21:57.742793 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-2gz7t"] Jan 06 14:21:58 crc kubenswrapper[4869]: I0106 14:21:58.673911 4869 generic.go:334] "Generic (PLEG): container finished" podID="254ec1d5-4669-4f52-b206-ff0c28541337" containerID="98109fd5215b7166684936dd33135ecd7e34c7ad4cd4b3283a5d5c0953ea586b" exitCode=0 Jan 06 14:21:58 crc kubenswrapper[4869]: I0106 14:21:58.673954 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" event={"ID":"254ec1d5-4669-4f52-b206-ff0c28541337","Type":"ContainerDied","Data":"98109fd5215b7166684936dd33135ecd7e34c7ad4cd4b3283a5d5c0953ea586b"} Jan 06 14:21:59 crc kubenswrapper[4869]: I0106 14:21:59.690506 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" event={"ID":"254ec1d5-4669-4f52-b206-ff0c28541337","Type":"ContainerStarted","Data":"c894ffcbf393cd600a9ed74805edf535a645120b7278a84e6108db75b2b9942c"} Jan 06 14:21:59 crc kubenswrapper[4869]: I0106 14:21:59.727568 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb1f8717-036d-410e-bd16-8c42c4c9252b" path="/var/lib/kubelet/pods/fb1f8717-036d-410e-bd16-8c42c4c9252b/volumes" Jan 06 14:21:59 crc kubenswrapper[4869]: I0106 14:21:59.733450 4869 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" podStartSLOduration=3.733425488 podStartE2EDuration="3.733425488s" podCreationTimestamp="2026-01-06 14:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:21:59.720518668 +0000 UTC m=+1338.260206352" watchObservedRunningTime="2026-01-06 14:21:59.733425488 +0000 UTC m=+1338.273113162" Jan 06 14:22:00 crc kubenswrapper[4869]: I0106 14:22:00.701960 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:22:03 crc kubenswrapper[4869]: I0106 14:22:03.622442 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:22:03 crc kubenswrapper[4869]: I0106 14:22:03.623882 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:22:03 crc kubenswrapper[4869]: I0106 14:22:03.624054 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:22:03 crc kubenswrapper[4869]: I0106 14:22:03.625210 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a332af473bcbaead046814c5bfbced58c6de6afeca96a8b9d1a45f6d711dbe8f"} pod="openshift-machine-config-operator/machine-config-daemon-kt9df" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 06 14:22:03 crc kubenswrapper[4869]: I0106 14:22:03.625388 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" containerID="cri-o://a332af473bcbaead046814c5bfbced58c6de6afeca96a8b9d1a45f6d711dbe8f" gracePeriod=600 Jan 06 14:22:04 crc kubenswrapper[4869]: I0106 14:22:04.741114 4869 generic.go:334] "Generic (PLEG): container finished" podID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerID="a332af473bcbaead046814c5bfbced58c6de6afeca96a8b9d1a45f6d711dbe8f" exitCode=0 Jan 06 14:22:04 crc kubenswrapper[4869]: I0106 14:22:04.741617 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerDied","Data":"a332af473bcbaead046814c5bfbced58c6de6afeca96a8b9d1a45f6d711dbe8f"} Jan 06 14:22:04 crc kubenswrapper[4869]: I0106 14:22:04.741644 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerStarted","Data":"590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f"} Jan 06 14:22:04 crc kubenswrapper[4869]: I0106 14:22:04.741676 4869 scope.go:117] "RemoveContainer" containerID="761debf1eef98bf25e5ef97d0bbf7309e01c1e5b01dc714bc8dcd3f2a34d299e" Jan 06 14:22:06 crc 
kubenswrapper[4869]: I0106 14:22:06.953962 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-864d5fc68c-5vx6z" Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.021276 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-lrc5j"] Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.021646 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" podUID="5fffe6a7-cbd9-4f34-a4f4-34648ceca331" containerName="dnsmasq-dns" containerID="cri-o://b71f3edea315454bc68f61d9742133767a8724d566de8e34f5e1a937b65945c1" gracePeriod=10 Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.566483 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.651783 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-ovsdbserver-sb\") pod \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.651927 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-ovsdbserver-nb\") pod \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.653561 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-config\") pod \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.653635 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-openstack-edpm-ipam\") pod \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.653723 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-dns-svc\") pod \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.653785 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxrwv\" (UniqueName: \"kubernetes.io/projected/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-kube-api-access-sxrwv\") pod \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\" (UID: \"5fffe6a7-cbd9-4f34-a4f4-34648ceca331\") " Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.660655 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-kube-api-access-sxrwv" (OuterVolumeSpecName: "kube-api-access-sxrwv") pod "5fffe6a7-cbd9-4f34-a4f4-34648ceca331" (UID: "5fffe6a7-cbd9-4f34-a4f4-34648ceca331"). InnerVolumeSpecName "kube-api-access-sxrwv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.708871 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "5fffe6a7-cbd9-4f34-a4f4-34648ceca331" (UID: "5fffe6a7-cbd9-4f34-a4f4-34648ceca331"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.709311 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5fffe6a7-cbd9-4f34-a4f4-34648ceca331" (UID: "5fffe6a7-cbd9-4f34-a4f4-34648ceca331"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.709831 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5fffe6a7-cbd9-4f34-a4f4-34648ceca331" (UID: "5fffe6a7-cbd9-4f34-a4f4-34648ceca331"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.719490 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5fffe6a7-cbd9-4f34-a4f4-34648ceca331" (UID: "5fffe6a7-cbd9-4f34-a4f4-34648ceca331"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.732478 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-config" (OuterVolumeSpecName: "config") pod "5fffe6a7-cbd9-4f34-a4f4-34648ceca331" (UID: "5fffe6a7-cbd9-4f34-a4f4-34648ceca331"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.756739 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.756781 4869 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.756795 4869 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-config\") on node \"crc\" DevicePath \"\"" Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.756806 4869 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.756818 4869 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.756827 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxrwv\" (UniqueName: \"kubernetes.io/projected/5fffe6a7-cbd9-4f34-a4f4-34648ceca331-kube-api-access-sxrwv\") on node \"crc\" DevicePath \"\"" Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.787020 4869 generic.go:334] "Generic (PLEG): container finished" podID="5fffe6a7-cbd9-4f34-a4f4-34648ceca331" containerID="b71f3edea315454bc68f61d9742133767a8724d566de8e34f5e1a937b65945c1" exitCode=0 Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.787077 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" event={"ID":"5fffe6a7-cbd9-4f34-a4f4-34648ceca331","Type":"ContainerDied","Data":"b71f3edea315454bc68f61d9742133767a8724d566de8e34f5e1a937b65945c1"} Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.787108 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" event={"ID":"5fffe6a7-cbd9-4f34-a4f4-34648ceca331","Type":"ContainerDied","Data":"c8271a04f3c31cf7f4856cbb5294569a4d13bab1e6dfa2e0471edee1802ebf0e"} Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.787131 4869 scope.go:117] "RemoveContainer" containerID="b71f3edea315454bc68f61d9742133767a8724d566de8e34f5e1a937b65945c1" Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.787286 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-lrc5j" Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.813020 4869 scope.go:117] "RemoveContainer" containerID="54aa70201496b02a4a7417503ee88dfa657053727801bde234965673b2ac7921" Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.827346 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-lrc5j"] Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.827445 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-lrc5j"] Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.875941 4869 scope.go:117] "RemoveContainer" containerID="b71f3edea315454bc68f61d9742133767a8724d566de8e34f5e1a937b65945c1" Jan 06 14:22:07 crc kubenswrapper[4869]: E0106 14:22:07.876347 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b71f3edea315454bc68f61d9742133767a8724d566de8e34f5e1a937b65945c1\": container with ID starting with b71f3edea315454bc68f61d9742133767a8724d566de8e34f5e1a937b65945c1 not found: ID does not exist" containerID="b71f3edea315454bc68f61d9742133767a8724d566de8e34f5e1a937b65945c1" Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.876380 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b71f3edea315454bc68f61d9742133767a8724d566de8e34f5e1a937b65945c1"} err="failed to get container status \"b71f3edea315454bc68f61d9742133767a8724d566de8e34f5e1a937b65945c1\": rpc error: code = NotFound desc = could not find container \"b71f3edea315454bc68f61d9742133767a8724d566de8e34f5e1a937b65945c1\": container with ID starting with b71f3edea315454bc68f61d9742133767a8724d566de8e34f5e1a937b65945c1 not found: ID does not exist" Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.876426 4869 scope.go:117] "RemoveContainer" containerID="54aa70201496b02a4a7417503ee88dfa657053727801bde234965673b2ac7921" Jan 06 14:22:07 crc kubenswrapper[4869]: E0106 14:22:07.876708 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54aa70201496b02a4a7417503ee88dfa657053727801bde234965673b2ac7921\": container with ID starting with 54aa70201496b02a4a7417503ee88dfa657053727801bde234965673b2ac7921 not found: ID does not exist" containerID="54aa70201496b02a4a7417503ee88dfa657053727801bde234965673b2ac7921" Jan 06 14:22:07 crc kubenswrapper[4869]: I0106 14:22:07.876730 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54aa70201496b02a4a7417503ee88dfa657053727801bde234965673b2ac7921"} err="failed to get container status \"54aa70201496b02a4a7417503ee88dfa657053727801bde234965673b2ac7921\": rpc error: code = NotFound desc = could not find container \"54aa70201496b02a4a7417503ee88dfa657053727801bde234965673b2ac7921\": container with ID starting with 54aa70201496b02a4a7417503ee88dfa657053727801bde234965673b2ac7921 not found: ID does not exist" Jan 06 14:22:09 crc kubenswrapper[4869]: I0106 14:22:09.721659 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fffe6a7-cbd9-4f34-a4f4-34648ceca331" path="/var/lib/kubelet/pods/5fffe6a7-cbd9-4f34-a4f4-34648ceca331/volumes" Jan 06 14:22:16 crc kubenswrapper[4869]: I0106 14:22:16.948942 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q"] Jan 06 14:22:16 crc kubenswrapper[4869]: E0106 14:22:16.949975 4869 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fffe6a7-cbd9-4f34-a4f4-34648ceca331" containerName="dnsmasq-dns" Jan 06 14:22:16 crc kubenswrapper[4869]: I0106 14:22:16.949994 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fffe6a7-cbd9-4f34-a4f4-34648ceca331" containerName="dnsmasq-dns" Jan 06 14:22:16 crc kubenswrapper[4869]: E0106 14:22:16.950010 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fffe6a7-cbd9-4f34-a4f4-34648ceca331" containerName="init" Jan 06 14:22:16 crc kubenswrapper[4869]: I0106 14:22:16.950018 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fffe6a7-cbd9-4f34-a4f4-34648ceca331" containerName="init" Jan 06 14:22:16 crc kubenswrapper[4869]: E0106 14:22:16.950055 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1f8717-036d-410e-bd16-8c42c4c9252b" containerName="init" Jan 06 14:22:16 crc kubenswrapper[4869]: I0106 14:22:16.950062 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1f8717-036d-410e-bd16-8c42c4c9252b" containerName="init" Jan 06 14:22:16 crc kubenswrapper[4869]: E0106 14:22:16.950075 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb1f8717-036d-410e-bd16-8c42c4c9252b" containerName="dnsmasq-dns" Jan 06 14:22:16 crc kubenswrapper[4869]: I0106 14:22:16.950083 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb1f8717-036d-410e-bd16-8c42c4c9252b" containerName="dnsmasq-dns" Jan 06 14:22:16 crc kubenswrapper[4869]: I0106 14:22:16.950284 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb1f8717-036d-410e-bd16-8c42c4c9252b" containerName="dnsmasq-dns" Jan 06 14:22:16 crc kubenswrapper[4869]: I0106 14:22:16.950299 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fffe6a7-cbd9-4f34-a4f4-34648ceca331" containerName="dnsmasq-dns" Jan 06 14:22:16 crc kubenswrapper[4869]: I0106 14:22:16.951121 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q" Jan 06 14:22:16 crc kubenswrapper[4869]: I0106 14:22:16.958176 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:22:16 crc kubenswrapper[4869]: I0106 14:22:16.958490 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:22:16 crc kubenswrapper[4869]: I0106 14:22:16.958504 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:22:16 crc kubenswrapper[4869]: I0106 14:22:16.958623 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:22:16 crc kubenswrapper[4869]: I0106 14:22:16.990409 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q"] Jan 06 14:22:17 crc kubenswrapper[4869]: I0106 14:22:17.032603 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q\" (UID: \"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q" Jan 06 14:22:17 crc kubenswrapper[4869]: I0106 14:22:17.032651 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zf9s\" (UniqueName: \"kubernetes.io/projected/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-kube-api-access-4zf9s\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q\" (UID: \"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q" Jan 06 14:22:17 crc kubenswrapper[4869]: I0106 14:22:17.032742 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q\" (UID: \"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q" Jan 06 14:22:17 crc kubenswrapper[4869]: I0106 14:22:17.032795 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q\" (UID: \"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q" Jan 06 14:22:17 crc kubenswrapper[4869]: I0106 14:22:17.134922 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q\" (UID: \"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q" Jan 06 14:22:17 crc kubenswrapper[4869]: I0106 14:22:17.135365 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zf9s\" (UniqueName: 
\"kubernetes.io/projected/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-kube-api-access-4zf9s\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q\" (UID: \"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q" Jan 06 14:22:17 crc kubenswrapper[4869]: I0106 14:22:17.135413 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q\" (UID: \"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q" Jan 06 14:22:17 crc kubenswrapper[4869]: I0106 14:22:17.135490 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q\" (UID: \"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q" Jan 06 14:22:17 crc kubenswrapper[4869]: I0106 14:22:17.144404 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q\" (UID: \"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q" Jan 06 14:22:17 crc kubenswrapper[4869]: I0106 14:22:17.144800 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q\" (UID: \"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q" Jan 06 14:22:17 crc kubenswrapper[4869]: I0106 14:22:17.150518 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q\" (UID: \"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q" Jan 06 14:22:17 crc kubenswrapper[4869]: I0106 14:22:17.157142 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zf9s\" (UniqueName: \"kubernetes.io/projected/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-kube-api-access-4zf9s\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q\" (UID: \"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q" Jan 06 14:22:17 crc kubenswrapper[4869]: I0106 14:22:17.279330 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q" Jan 06 14:22:17 crc kubenswrapper[4869]: I0106 14:22:17.810270 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q"] Jan 06 14:22:17 crc kubenswrapper[4869]: I0106 14:22:17.894755 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q" event={"ID":"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761","Type":"ContainerStarted","Data":"a0e2a44507b1334e3e4cffeb7ba23e81e034eda830291630382a23881f2d01c3"} Jan 06 14:22:19 crc kubenswrapper[4869]: I0106 14:22:19.918510 4869 generic.go:334] "Generic (PLEG): container finished" podID="84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46" containerID="5495f04f7af3c1e1adefce1bf69e4edd504518be2abdd0396e7cadb6fd4c6d7d" exitCode=0 Jan 06 14:22:19 crc kubenswrapper[4869]: I0106 14:22:19.919199 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46","Type":"ContainerDied","Data":"5495f04f7af3c1e1adefce1bf69e4edd504518be2abdd0396e7cadb6fd4c6d7d"} Jan 06 14:22:20 crc kubenswrapper[4869]: I0106 14:22:20.931525 4869 generic.go:334] "Generic (PLEG): container finished" podID="8efa0859-d909-40a3-8868-2cee1b98f0dd" containerID="01c8b177e9f6c835a8f988f896f38925b4b49c4ce96de4b7a0f204ce3b1876f2" exitCode=0 Jan 06 14:22:20 crc kubenswrapper[4869]: I0106 14:22:20.931864 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8efa0859-d909-40a3-8868-2cee1b98f0dd","Type":"ContainerDied","Data":"01c8b177e9f6c835a8f988f896f38925b4b49c4ce96de4b7a0f204ce3b1876f2"} Jan 06 14:22:20 crc kubenswrapper[4869]: I0106 14:22:20.943056 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46","Type":"ContainerStarted","Data":"4fe82d6af2092b71d92197e8df2d192305d10a74321dc0547d9cfffc6e5048a7"} Jan 06 14:22:20 crc kubenswrapper[4869]: I0106 14:22:20.945801 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 06 14:22:21 crc kubenswrapper[4869]: I0106 14:22:21.736853 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.736808638 podStartE2EDuration="38.736808638s" podCreationTimestamp="2026-01-06 14:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:22:21.002417434 +0000 UTC m=+1359.542105108" watchObservedRunningTime="2026-01-06 14:22:21.736808638 +0000 UTC m=+1360.276496302" Jan 06 14:22:21 crc kubenswrapper[4869]: I0106 14:22:21.955862 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8efa0859-d909-40a3-8868-2cee1b98f0dd","Type":"ContainerStarted","Data":"47bad1bf4be0aaf652b823a4d3c872fee55738983a3bb900b2a8fbe879130974"} Jan 06 14:22:21 crc kubenswrapper[4869]: I0106 14:22:21.956595 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:22:21 crc kubenswrapper[4869]: I0106 14:22:21.987025 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.986992078 podStartE2EDuration="37.986992078s" podCreationTimestamp="2026-01-06 14:21:44 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 14:22:21.985764278 +0000 UTC m=+1360.525451962" watchObservedRunningTime="2026-01-06 14:22:21.986992078 +0000 UTC m=+1360.526679762" Jan 06 14:22:29 crc kubenswrapper[4869]: I0106 14:22:29.076889 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q" event={"ID":"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761","Type":"ContainerStarted","Data":"68ce1357359b3094465a3024e49358c0c13ea052405d75e7af6cd30c1a391016"} Jan 06 14:22:29 crc kubenswrapper[4869]: I0106 14:22:29.111484 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q" podStartSLOduration=2.782290431 podStartE2EDuration="13.111456178s" podCreationTimestamp="2026-01-06 14:22:16 +0000 UTC" firstStartedPulling="2026-01-06 14:22:17.821397635 +0000 UTC m=+1356.361085299" lastFinishedPulling="2026-01-06 14:22:28.150563382 +0000 UTC m=+1366.690251046" observedRunningTime="2026-01-06 14:22:29.10150957 +0000 UTC m=+1367.641197234" watchObservedRunningTime="2026-01-06 14:22:29.111456178 +0000 UTC m=+1367.651143852" Jan 06 14:22:34 crc kubenswrapper[4869]: I0106 14:22:34.176657 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 06 14:22:35 crc kubenswrapper[4869]: I0106 14:22:35.168892 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 06 14:22:40 crc kubenswrapper[4869]: I0106 14:22:40.183912 4869 generic.go:334] "Generic (PLEG): container finished" podID="0bfa4b45-9040-4ea6-b8e1-0fd641cb4761" containerID="68ce1357359b3094465a3024e49358c0c13ea052405d75e7af6cd30c1a391016" exitCode=0 Jan 06 14:22:40 crc kubenswrapper[4869]: I0106 14:22:40.183999 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q" event={"ID":"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761","Type":"ContainerDied","Data":"68ce1357359b3094465a3024e49358c0c13ea052405d75e7af6cd30c1a391016"} Jan 06 14:22:41 crc kubenswrapper[4869]: I0106 14:22:41.643210 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q" Jan 06 14:22:41 crc kubenswrapper[4869]: I0106 14:22:41.774239 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-repo-setup-combined-ca-bundle\") pod \"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761\" (UID: \"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761\") " Jan 06 14:22:41 crc kubenswrapper[4869]: I0106 14:22:41.774692 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-ssh-key-openstack-edpm-ipam\") pod \"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761\" (UID: \"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761\") " Jan 06 14:22:41 crc kubenswrapper[4869]: I0106 14:22:41.774807 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zf9s\" (UniqueName: \"kubernetes.io/projected/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-kube-api-access-4zf9s\") pod \"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761\" (UID: \"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761\") " Jan 06 14:22:41 crc kubenswrapper[4869]: I0106 14:22:41.774917 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-inventory\") pod \"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761\" (UID: \"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761\") " Jan 06 14:22:41 crc kubenswrapper[4869]: I0106 14:22:41.784804 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0bfa4b45-9040-4ea6-b8e1-0fd641cb4761" (UID: "0bfa4b45-9040-4ea6-b8e1-0fd641cb4761"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:22:41 crc kubenswrapper[4869]: I0106 14:22:41.784914 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-kube-api-access-4zf9s" (OuterVolumeSpecName: "kube-api-access-4zf9s") pod "0bfa4b45-9040-4ea6-b8e1-0fd641cb4761" (UID: "0bfa4b45-9040-4ea6-b8e1-0fd641cb4761"). InnerVolumeSpecName "kube-api-access-4zf9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:22:41 crc kubenswrapper[4869]: I0106 14:22:41.822989 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-inventory" (OuterVolumeSpecName: "inventory") pod "0bfa4b45-9040-4ea6-b8e1-0fd641cb4761" (UID: "0bfa4b45-9040-4ea6-b8e1-0fd641cb4761"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:22:41 crc kubenswrapper[4869]: I0106 14:22:41.826802 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0bfa4b45-9040-4ea6-b8e1-0fd641cb4761" (UID: "0bfa4b45-9040-4ea6-b8e1-0fd641cb4761"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:22:41 crc kubenswrapper[4869]: I0106 14:22:41.877990 4869 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:22:41 crc kubenswrapper[4869]: I0106 14:22:41.878268 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:22:41 crc kubenswrapper[4869]: I0106 14:22:41.878349 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zf9s\" (UniqueName: \"kubernetes.io/projected/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-kube-api-access-4zf9s\") on node \"crc\" DevicePath \"\"" Jan 06 14:22:41 crc kubenswrapper[4869]: I0106 14:22:41.878425 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761-inventory\") on node \"crc\" DevicePath \"\"" Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.201495 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q" event={"ID":"0bfa4b45-9040-4ea6-b8e1-0fd641cb4761","Type":"ContainerDied","Data":"a0e2a44507b1334e3e4cffeb7ba23e81e034eda830291630382a23881f2d01c3"} Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.201915 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0e2a44507b1334e3e4cffeb7ba23e81e034eda830291630382a23881f2d01c3" Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.201826 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q" Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.285163 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6"] Jan 06 14:22:42 crc kubenswrapper[4869]: E0106 14:22:42.285793 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bfa4b45-9040-4ea6-b8e1-0fd641cb4761" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.285826 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bfa4b45-9040-4ea6-b8e1-0fd641cb4761" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.286211 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bfa4b45-9040-4ea6-b8e1-0fd641cb4761" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.287408 4869 util.go:30] "No sandbox for pod can be found. 
Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.289246 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.289775 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.290211 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.293549 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5"
Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.296187 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6"]
Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.388941 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/496e118b-1f27-46d6-aca1-5060fb3ba1aa-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6\" (UID: \"496e118b-1f27-46d6-aca1-5060fb3ba1aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6"
Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.389066 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2xdp\" (UniqueName: \"kubernetes.io/projected/496e118b-1f27-46d6-aca1-5060fb3ba1aa-kube-api-access-x2xdp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6\" (UID: \"496e118b-1f27-46d6-aca1-5060fb3ba1aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6"
Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.389205 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/496e118b-1f27-46d6-aca1-5060fb3ba1aa-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6\" (UID: \"496e118b-1f27-46d6-aca1-5060fb3ba1aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6"
Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.389319 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/496e118b-1f27-46d6-aca1-5060fb3ba1aa-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6\" (UID: \"496e118b-1f27-46d6-aca1-5060fb3ba1aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6"
Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.491656 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/496e118b-1f27-46d6-aca1-5060fb3ba1aa-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6\" (UID: \"496e118b-1f27-46d6-aca1-5060fb3ba1aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6"
Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.492043 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/496e118b-1f27-46d6-aca1-5060fb3ba1aa-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6\" (UID: \"496e118b-1f27-46d6-aca1-5060fb3ba1aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6"
Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.492185 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2xdp\" (UniqueName: \"kubernetes.io/projected/496e118b-1f27-46d6-aca1-5060fb3ba1aa-kube-api-access-x2xdp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6\" (UID: \"496e118b-1f27-46d6-aca1-5060fb3ba1aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6"
Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.492305 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/496e118b-1f27-46d6-aca1-5060fb3ba1aa-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6\" (UID: \"496e118b-1f27-46d6-aca1-5060fb3ba1aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6"
Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.497921 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/496e118b-1f27-46d6-aca1-5060fb3ba1aa-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6\" (UID: \"496e118b-1f27-46d6-aca1-5060fb3ba1aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6"
Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.499584 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/496e118b-1f27-46d6-aca1-5060fb3ba1aa-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6\" (UID: \"496e118b-1f27-46d6-aca1-5060fb3ba1aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6"
Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.499915 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/496e118b-1f27-46d6-aca1-5060fb3ba1aa-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6\" (UID: \"496e118b-1f27-46d6-aca1-5060fb3ba1aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6"
Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.511793 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2xdp\" (UniqueName: \"kubernetes.io/projected/496e118b-1f27-46d6-aca1-5060fb3ba1aa-kube-api-access-x2xdp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6\" (UID: \"496e118b-1f27-46d6-aca1-5060fb3ba1aa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6"
Jan 06 14:22:42 crc kubenswrapper[4869]: I0106 14:22:42.607047 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6"
Jan 06 14:22:43 crc kubenswrapper[4869]: W0106 14:22:43.167866 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod496e118b_1f27_46d6_aca1_5060fb3ba1aa.slice/crio-f04da1bf9b4a2ebff1fadd6bd41bb50a2739f0d89db451c5b89ffa4d3c5cdcbf WatchSource:0}: Error finding container f04da1bf9b4a2ebff1fadd6bd41bb50a2739f0d89db451c5b89ffa4d3c5cdcbf: Status 404 returned error can't find the container with id f04da1bf9b4a2ebff1fadd6bd41bb50a2739f0d89db451c5b89ffa4d3c5cdcbf
Jan 06 14:22:43 crc kubenswrapper[4869]: I0106 14:22:43.172565 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6"]
Jan 06 14:22:43 crc kubenswrapper[4869]: I0106 14:22:43.212766 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6" event={"ID":"496e118b-1f27-46d6-aca1-5060fb3ba1aa","Type":"ContainerStarted","Data":"f04da1bf9b4a2ebff1fadd6bd41bb50a2739f0d89db451c5b89ffa4d3c5cdcbf"}
Jan 06 14:22:43 crc kubenswrapper[4869]: I0106 14:22:43.585832 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 06 14:22:44 crc kubenswrapper[4869]: I0106 14:22:44.224583 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6" event={"ID":"496e118b-1f27-46d6-aca1-5060fb3ba1aa","Type":"ContainerStarted","Data":"4b10df9a0bda839b5273b1b6703d809d616346977921fd72fa1951d2a4a0512a"}
Jan 06 14:22:44 crc kubenswrapper[4869]: I0106 14:22:44.253655 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6" podStartSLOduration=1.842755803 podStartE2EDuration="2.253620198s" podCreationTimestamp="2026-01-06 14:22:42 +0000 UTC" firstStartedPulling="2026-01-06 14:22:43.170721409 +0000 UTC m=+1381.710409083" lastFinishedPulling="2026-01-06 14:22:43.581585804 +0000 UTC m=+1382.121273478" observedRunningTime="2026-01-06 14:22:44.24155806 +0000 UTC m=+1382.781245714" watchObservedRunningTime="2026-01-06 14:22:44.253620198 +0000 UTC m=+1382.793307902"
Jan 06 14:23:51 crc kubenswrapper[4869]: I0106 14:23:51.962443 4869 scope.go:117] "RemoveContainer" containerID="379510929f2b336c0c77e9b2c5dd736a6c522b38e1119b717ce36452c40dd080"
Jan 06 14:23:52 crc kubenswrapper[4869]: I0106 14:23:52.005239 4869 scope.go:117] "RemoveContainer" containerID="84be22290d5b8f49531beba7ffe3725ef350e30bb3b6ceaaf9e2a33c055f6a51"
Jan 06 14:24:33 crc kubenswrapper[4869]: I0106 14:24:33.622401 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 06 14:24:33 crc kubenswrapper[4869]: I0106 14:24:33.623241 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 06 14:24:52 crc kubenswrapper[4869]: I0106 14:24:52.163247 4869 scope.go:117] "RemoveContainer" containerID="35f0ce42103960943511a227bad87a2055f5eca58a84c08a36e548ccd5d9584e"
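The pod_startup_latency_tracker entry above decodes as follows: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same interval minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick sanity check in Go, using only values present in the entry:

    package main

    import (
        "fmt"
        "time"
    )

    // mustParse handles the "2026-01-06 14:22:43.170721409 +0000 UTC" form
    // used by the kubelet's latency-tracker log fields.
    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-01-06 14:22:42 +0000 UTC")
        firstPull := mustParse("2026-01-06 14:22:43.170721409 +0000 UTC")
        lastPull := mustParse("2026-01-06 14:22:43.581585804 +0000 UTC")
        observed := mustParse("2026-01-06 14:22:44.253620198 +0000 UTC")

        e2e := observed.Sub(created)         // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // image-pull window excluded

        fmt.Println(e2e) // 2.253620198s
        fmt.Println(slo) // 1.842755803s
    }

Both printed values match the logged podStartE2EDuration="2.253620198s" and podStartSLOduration=1.842755803, so the SLO metric is simply startup time with image pulling factored out.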
containerID="35f0ce42103960943511a227bad87a2055f5eca58a84c08a36e548ccd5d9584e" Jan 06 14:25:03 crc kubenswrapper[4869]: I0106 14:25:03.622865 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:25:03 crc kubenswrapper[4869]: I0106 14:25:03.623502 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:25:33 crc kubenswrapper[4869]: I0106 14:25:33.622326 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:25:33 crc kubenswrapper[4869]: I0106 14:25:33.623546 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:25:33 crc kubenswrapper[4869]: I0106 14:25:33.623643 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:25:33 crc kubenswrapper[4869]: I0106 14:25:33.625143 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f"} pod="openshift-machine-config-operator/machine-config-daemon-kt9df" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 06 14:25:33 crc kubenswrapper[4869]: I0106 14:25:33.625263 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" containerID="cri-o://590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f" gracePeriod=600 Jan 06 14:25:33 crc kubenswrapper[4869]: E0106 14:25:33.754238 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:25:34 crc kubenswrapper[4869]: I0106 14:25:34.020301 4869 generic.go:334] "Generic (PLEG): container finished" podID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f" exitCode=0 Jan 06 14:25:34 crc kubenswrapper[4869]: I0106 14:25:34.020351 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" 
event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerDied","Data":"590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f"} Jan 06 14:25:34 crc kubenswrapper[4869]: I0106 14:25:34.020405 4869 scope.go:117] "RemoveContainer" containerID="a332af473bcbaead046814c5bfbced58c6de6afeca96a8b9d1a45f6d711dbe8f" Jan 06 14:25:34 crc kubenswrapper[4869]: I0106 14:25:34.021020 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f" Jan 06 14:25:34 crc kubenswrapper[4869]: E0106 14:25:34.021482 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:25:46 crc kubenswrapper[4869]: I0106 14:25:46.704876 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f" Jan 06 14:25:46 crc kubenswrapper[4869]: E0106 14:25:46.707758 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:25:57 crc kubenswrapper[4869]: I0106 14:25:57.260947 4869 generic.go:334] "Generic (PLEG): container finished" podID="496e118b-1f27-46d6-aca1-5060fb3ba1aa" containerID="4b10df9a0bda839b5273b1b6703d809d616346977921fd72fa1951d2a4a0512a" exitCode=0 Jan 06 14:25:57 crc kubenswrapper[4869]: I0106 14:25:57.261037 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6" event={"ID":"496e118b-1f27-46d6-aca1-5060fb3ba1aa","Type":"ContainerDied","Data":"4b10df9a0bda839b5273b1b6703d809d616346977921fd72fa1951d2a4a0512a"} Jan 06 14:25:58 crc kubenswrapper[4869]: I0106 14:25:58.694296 4869 util.go:48] "No ready sandbox for pod can be found. 
Jan 06 14:25:58 crc kubenswrapper[4869]: I0106 14:25:58.889039 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2xdp\" (UniqueName: \"kubernetes.io/projected/496e118b-1f27-46d6-aca1-5060fb3ba1aa-kube-api-access-x2xdp\") pod \"496e118b-1f27-46d6-aca1-5060fb3ba1aa\" (UID: \"496e118b-1f27-46d6-aca1-5060fb3ba1aa\") "
Jan 06 14:25:58 crc kubenswrapper[4869]: I0106 14:25:58.889158 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/496e118b-1f27-46d6-aca1-5060fb3ba1aa-ssh-key-openstack-edpm-ipam\") pod \"496e118b-1f27-46d6-aca1-5060fb3ba1aa\" (UID: \"496e118b-1f27-46d6-aca1-5060fb3ba1aa\") "
Jan 06 14:25:58 crc kubenswrapper[4869]: I0106 14:25:58.889276 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/496e118b-1f27-46d6-aca1-5060fb3ba1aa-bootstrap-combined-ca-bundle\") pod \"496e118b-1f27-46d6-aca1-5060fb3ba1aa\" (UID: \"496e118b-1f27-46d6-aca1-5060fb3ba1aa\") "
Jan 06 14:25:58 crc kubenswrapper[4869]: I0106 14:25:58.889327 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/496e118b-1f27-46d6-aca1-5060fb3ba1aa-inventory\") pod \"496e118b-1f27-46d6-aca1-5060fb3ba1aa\" (UID: \"496e118b-1f27-46d6-aca1-5060fb3ba1aa\") "
Jan 06 14:25:58 crc kubenswrapper[4869]: I0106 14:25:58.898839 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e118b-1f27-46d6-aca1-5060fb3ba1aa-kube-api-access-x2xdp" (OuterVolumeSpecName: "kube-api-access-x2xdp") pod "496e118b-1f27-46d6-aca1-5060fb3ba1aa" (UID: "496e118b-1f27-46d6-aca1-5060fb3ba1aa"). InnerVolumeSpecName "kube-api-access-x2xdp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 06 14:25:58 crc kubenswrapper[4869]: I0106 14:25:58.924492 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e118b-1f27-46d6-aca1-5060fb3ba1aa-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "496e118b-1f27-46d6-aca1-5060fb3ba1aa" (UID: "496e118b-1f27-46d6-aca1-5060fb3ba1aa"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 06 14:25:58 crc kubenswrapper[4869]: I0106 14:25:58.945948 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e118b-1f27-46d6-aca1-5060fb3ba1aa-inventory" (OuterVolumeSpecName: "inventory") pod "496e118b-1f27-46d6-aca1-5060fb3ba1aa" (UID: "496e118b-1f27-46d6-aca1-5060fb3ba1aa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 06 14:25:58 crc kubenswrapper[4869]: I0106 14:25:58.960754 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e118b-1f27-46d6-aca1-5060fb3ba1aa-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "496e118b-1f27-46d6-aca1-5060fb3ba1aa" (UID: "496e118b-1f27-46d6-aca1-5060fb3ba1aa"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 06 14:25:58 crc kubenswrapper[4869]: I0106 14:25:58.991798 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2xdp\" (UniqueName: \"kubernetes.io/projected/496e118b-1f27-46d6-aca1-5060fb3ba1aa-kube-api-access-x2xdp\") on node \"crc\" DevicePath \"\""
Jan 06 14:25:58 crc kubenswrapper[4869]: I0106 14:25:58.991833 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/496e118b-1f27-46d6-aca1-5060fb3ba1aa-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 06 14:25:58 crc kubenswrapper[4869]: I0106 14:25:58.991844 4869 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/496e118b-1f27-46d6-aca1-5060fb3ba1aa-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 06 14:25:58 crc kubenswrapper[4869]: I0106 14:25:58.991853 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/496e118b-1f27-46d6-aca1-5060fb3ba1aa-inventory\") on node \"crc\" DevicePath \"\""
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.279967 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6" event={"ID":"496e118b-1f27-46d6-aca1-5060fb3ba1aa","Type":"ContainerDied","Data":"f04da1bf9b4a2ebff1fadd6bd41bb50a2739f0d89db451c5b89ffa4d3c5cdcbf"}
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.280006 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f04da1bf9b4a2ebff1fadd6bd41bb50a2739f0d89db451c5b89ffa4d3c5cdcbf"
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.280048 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6"
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.378984 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct"]
Jan 06 14:25:59 crc kubenswrapper[4869]: E0106 14:25:59.379431 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="496e118b-1f27-46d6-aca1-5060fb3ba1aa" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.379455 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="496e118b-1f27-46d6-aca1-5060fb3ba1aa" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.379709 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="496e118b-1f27-46d6-aca1-5060fb3ba1aa" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.380346 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct"
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.382151 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5"
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.382945 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.383364 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.383703 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.392849 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct"]
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.510669 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/809af13b-e2f3-4eed-a5dc-9de20cab3ef4-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8zqct\" (UID: \"809af13b-e2f3-4eed-a5dc-9de20cab3ef4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct"
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.510791 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/809af13b-e2f3-4eed-a5dc-9de20cab3ef4-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8zqct\" (UID: \"809af13b-e2f3-4eed-a5dc-9de20cab3ef4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct"
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.510816 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm5q5\" (UniqueName: \"kubernetes.io/projected/809af13b-e2f3-4eed-a5dc-9de20cab3ef4-kube-api-access-hm5q5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8zqct\" (UID: \"809af13b-e2f3-4eed-a5dc-9de20cab3ef4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct"
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.612503 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/809af13b-e2f3-4eed-a5dc-9de20cab3ef4-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8zqct\" (UID: \"809af13b-e2f3-4eed-a5dc-9de20cab3ef4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct"
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.612560 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm5q5\" (UniqueName: \"kubernetes.io/projected/809af13b-e2f3-4eed-a5dc-9de20cab3ef4-kube-api-access-hm5q5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8zqct\" (UID: \"809af13b-e2f3-4eed-a5dc-9de20cab3ef4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct"
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.612721 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/809af13b-e2f3-4eed-a5dc-9de20cab3ef4-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8zqct\" (UID: \"809af13b-e2f3-4eed-a5dc-9de20cab3ef4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct"
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.619191 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/809af13b-e2f3-4eed-a5dc-9de20cab3ef4-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8zqct\" (UID: \"809af13b-e2f3-4eed-a5dc-9de20cab3ef4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct"
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.619851 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/809af13b-e2f3-4eed-a5dc-9de20cab3ef4-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8zqct\" (UID: \"809af13b-e2f3-4eed-a5dc-9de20cab3ef4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct"
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.629274 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm5q5\" (UniqueName: \"kubernetes.io/projected/809af13b-e2f3-4eed-a5dc-9de20cab3ef4-kube-api-access-hm5q5\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-8zqct\" (UID: \"809af13b-e2f3-4eed-a5dc-9de20cab3ef4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct"
Jan 06 14:25:59 crc kubenswrapper[4869]: I0106 14:25:59.713499 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct"
Jan 06 14:26:00 crc kubenswrapper[4869]: I0106 14:26:00.287509 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct"]
Jan 06 14:26:00 crc kubenswrapper[4869]: I0106 14:26:00.299175 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 06 14:26:00 crc kubenswrapper[4869]: I0106 14:26:00.705103 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f"
Jan 06 14:26:00 crc kubenswrapper[4869]: E0106 14:26:00.705766 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1"
Jan 06 14:26:01 crc kubenswrapper[4869]: I0106 14:26:01.296545 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct" event={"ID":"809af13b-e2f3-4eed-a5dc-9de20cab3ef4","Type":"ContainerStarted","Data":"9140e1c73268098bf770faeae60957604859e22ed806a4d1f7848347b08653b5"}
Jan 06 14:26:01 crc kubenswrapper[4869]: I0106 14:26:01.296586 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct" event={"ID":"809af13b-e2f3-4eed-a5dc-9de20cab3ef4","Type":"ContainerStarted","Data":"5e8bd975a2d69df30de24008aff1a6a9625831fb310f4617990caa63919d4d95"}
event={"ID":"809af13b-e2f3-4eed-a5dc-9de20cab3ef4","Type":"ContainerStarted","Data":"5e8bd975a2d69df30de24008aff1a6a9625831fb310f4617990caa63919d4d95"} Jan 06 14:26:01 crc kubenswrapper[4869]: I0106 14:26:01.321341 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct" podStartSLOduration=1.8655556720000002 podStartE2EDuration="2.321322213s" podCreationTimestamp="2026-01-06 14:25:59 +0000 UTC" firstStartedPulling="2026-01-06 14:26:00.298901224 +0000 UTC m=+1578.838588888" lastFinishedPulling="2026-01-06 14:26:00.754667765 +0000 UTC m=+1579.294355429" observedRunningTime="2026-01-06 14:26:01.311756724 +0000 UTC m=+1579.851444388" watchObservedRunningTime="2026-01-06 14:26:01.321322213 +0000 UTC m=+1579.861009877" Jan 06 14:26:11 crc kubenswrapper[4869]: I0106 14:26:11.711597 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f" Jan 06 14:26:11 crc kubenswrapper[4869]: E0106 14:26:11.712542 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:26:13 crc kubenswrapper[4869]: I0106 14:26:13.337458 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r2dtz"] Jan 06 14:26:13 crc kubenswrapper[4869]: I0106 14:26:13.341561 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r2dtz" Jan 06 14:26:13 crc kubenswrapper[4869]: I0106 14:26:13.352276 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r2dtz"] Jan 06 14:26:13 crc kubenswrapper[4869]: I0106 14:26:13.473274 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af76621-923d-4846-a2a3-2cff33a4cfc2-catalog-content\") pod \"redhat-operators-r2dtz\" (UID: \"3af76621-923d-4846-a2a3-2cff33a4cfc2\") " pod="openshift-marketplace/redhat-operators-r2dtz" Jan 06 14:26:13 crc kubenswrapper[4869]: I0106 14:26:13.473378 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr8x5\" (UniqueName: \"kubernetes.io/projected/3af76621-923d-4846-a2a3-2cff33a4cfc2-kube-api-access-hr8x5\") pod \"redhat-operators-r2dtz\" (UID: \"3af76621-923d-4846-a2a3-2cff33a4cfc2\") " pod="openshift-marketplace/redhat-operators-r2dtz" Jan 06 14:26:13 crc kubenswrapper[4869]: I0106 14:26:13.473441 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af76621-923d-4846-a2a3-2cff33a4cfc2-utilities\") pod \"redhat-operators-r2dtz\" (UID: \"3af76621-923d-4846-a2a3-2cff33a4cfc2\") " pod="openshift-marketplace/redhat-operators-r2dtz" Jan 06 14:26:13 crc kubenswrapper[4869]: I0106 14:26:13.574929 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af76621-923d-4846-a2a3-2cff33a4cfc2-catalog-content\") pod \"redhat-operators-r2dtz\" (UID: 
\"3af76621-923d-4846-a2a3-2cff33a4cfc2\") " pod="openshift-marketplace/redhat-operators-r2dtz" Jan 06 14:26:13 crc kubenswrapper[4869]: I0106 14:26:13.575021 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr8x5\" (UniqueName: \"kubernetes.io/projected/3af76621-923d-4846-a2a3-2cff33a4cfc2-kube-api-access-hr8x5\") pod \"redhat-operators-r2dtz\" (UID: \"3af76621-923d-4846-a2a3-2cff33a4cfc2\") " pod="openshift-marketplace/redhat-operators-r2dtz" Jan 06 14:26:13 crc kubenswrapper[4869]: I0106 14:26:13.575080 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af76621-923d-4846-a2a3-2cff33a4cfc2-utilities\") pod \"redhat-operators-r2dtz\" (UID: \"3af76621-923d-4846-a2a3-2cff33a4cfc2\") " pod="openshift-marketplace/redhat-operators-r2dtz" Jan 06 14:26:13 crc kubenswrapper[4869]: I0106 14:26:13.575572 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af76621-923d-4846-a2a3-2cff33a4cfc2-utilities\") pod \"redhat-operators-r2dtz\" (UID: \"3af76621-923d-4846-a2a3-2cff33a4cfc2\") " pod="openshift-marketplace/redhat-operators-r2dtz" Jan 06 14:26:13 crc kubenswrapper[4869]: I0106 14:26:13.575588 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af76621-923d-4846-a2a3-2cff33a4cfc2-catalog-content\") pod \"redhat-operators-r2dtz\" (UID: \"3af76621-923d-4846-a2a3-2cff33a4cfc2\") " pod="openshift-marketplace/redhat-operators-r2dtz" Jan 06 14:26:13 crc kubenswrapper[4869]: I0106 14:26:13.596563 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr8x5\" (UniqueName: \"kubernetes.io/projected/3af76621-923d-4846-a2a3-2cff33a4cfc2-kube-api-access-hr8x5\") pod \"redhat-operators-r2dtz\" (UID: \"3af76621-923d-4846-a2a3-2cff33a4cfc2\") " pod="openshift-marketplace/redhat-operators-r2dtz" Jan 06 14:26:13 crc kubenswrapper[4869]: I0106 14:26:13.662174 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r2dtz" Jan 06 14:26:14 crc kubenswrapper[4869]: I0106 14:26:14.119693 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r2dtz"] Jan 06 14:26:14 crc kubenswrapper[4869]: I0106 14:26:14.411577 4869 generic.go:334] "Generic (PLEG): container finished" podID="3af76621-923d-4846-a2a3-2cff33a4cfc2" containerID="b800eb49f6cc20b3a537131cc4490c1a69b18d9809cf0fdd8024c6256f0ec689" exitCode=0 Jan 06 14:26:14 crc kubenswrapper[4869]: I0106 14:26:14.412783 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2dtz" event={"ID":"3af76621-923d-4846-a2a3-2cff33a4cfc2","Type":"ContainerDied","Data":"b800eb49f6cc20b3a537131cc4490c1a69b18d9809cf0fdd8024c6256f0ec689"} Jan 06 14:26:14 crc kubenswrapper[4869]: I0106 14:26:14.413020 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2dtz" event={"ID":"3af76621-923d-4846-a2a3-2cff33a4cfc2","Type":"ContainerStarted","Data":"11c4099d624986947e86fc7b8e4c2b1eb6b659c3472f56aef836e8694592a2bd"} Jan 06 14:26:16 crc kubenswrapper[4869]: I0106 14:26:16.436778 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2dtz" event={"ID":"3af76621-923d-4846-a2a3-2cff33a4cfc2","Type":"ContainerStarted","Data":"670ee22e8d738fb5b643d0b5ccc3880e5ba4c5928c5daccd26dae4cd5bfc5ac2"} Jan 06 14:26:18 crc kubenswrapper[4869]: I0106 14:26:18.457506 4869 generic.go:334] "Generic (PLEG): container finished" podID="3af76621-923d-4846-a2a3-2cff33a4cfc2" containerID="670ee22e8d738fb5b643d0b5ccc3880e5ba4c5928c5daccd26dae4cd5bfc5ac2" exitCode=0 Jan 06 14:26:18 crc kubenswrapper[4869]: I0106 14:26:18.457573 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2dtz" event={"ID":"3af76621-923d-4846-a2a3-2cff33a4cfc2","Type":"ContainerDied","Data":"670ee22e8d738fb5b643d0b5ccc3880e5ba4c5928c5daccd26dae4cd5bfc5ac2"} Jan 06 14:26:20 crc kubenswrapper[4869]: I0106 14:26:20.484311 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2dtz" event={"ID":"3af76621-923d-4846-a2a3-2cff33a4cfc2","Type":"ContainerStarted","Data":"db9f498d00f995290cbf2411b9155f0641748b4c1ff2062bcf85baae7d4a7297"} Jan 06 14:26:20 crc kubenswrapper[4869]: I0106 14:26:20.506998 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r2dtz" podStartSLOduration=2.367665199 podStartE2EDuration="7.506977323s" podCreationTimestamp="2026-01-06 14:26:13 +0000 UTC" firstStartedPulling="2026-01-06 14:26:14.413220148 +0000 UTC m=+1592.952907812" lastFinishedPulling="2026-01-06 14:26:19.552532262 +0000 UTC m=+1598.092219936" observedRunningTime="2026-01-06 14:26:20.502621184 +0000 UTC m=+1599.042308848" watchObservedRunningTime="2026-01-06 14:26:20.506977323 +0000 UTC m=+1599.046664997" Jan 06 14:26:23 crc kubenswrapper[4869]: I0106 14:26:23.663122 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r2dtz" Jan 06 14:26:23 crc kubenswrapper[4869]: I0106 14:26:23.664859 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r2dtz" Jan 06 14:26:24 crc kubenswrapper[4869]: I0106 14:26:24.742090 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r2dtz" 
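The startup-probe output above, timeout: failed to connect service ":50051" within 1s, is the failure format of grpc_health_probe; marketplace catalog pods typically probe the registry-server's gRPC health service on port 50051, and the probe keeps failing until the catalog index has loaded. An equivalent check in Go, assuming the standard grpc.health.v1 service (a sketch, not the probe binary itself):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        // grpc.NewClient is grpc-go v1.63+; older releases use grpc.Dial.
        conn, err := grpc.NewClient("localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatalf("failed to set up client for %q: %v", ":50051", err)
        }
        defer conn.Close()

        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            // grpc_health_probe reports this case as:
            //   timeout: failed to connect service ":50051" within 1s
            log.Fatalf("health check failed: %v", err)
        }
        fmt.Println(resp.GetStatus()) // SERVING once the catalog index is loaded
    }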
podUID="3af76621-923d-4846-a2a3-2cff33a4cfc2" containerName="registry-server" probeResult="failure" output=< Jan 06 14:26:24 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Jan 06 14:26:24 crc kubenswrapper[4869]: > Jan 06 14:26:25 crc kubenswrapper[4869]: I0106 14:26:25.707153 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f" Jan 06 14:26:25 crc kubenswrapper[4869]: E0106 14:26:25.707391 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:26:33 crc kubenswrapper[4869]: I0106 14:26:33.753258 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-r2dtz" Jan 06 14:26:33 crc kubenswrapper[4869]: I0106 14:26:33.804348 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-r2dtz" Jan 06 14:26:34 crc kubenswrapper[4869]: I0106 14:26:34.000415 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r2dtz"] Jan 06 14:26:35 crc kubenswrapper[4869]: I0106 14:26:35.653444 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r2dtz" podUID="3af76621-923d-4846-a2a3-2cff33a4cfc2" containerName="registry-server" containerID="cri-o://db9f498d00f995290cbf2411b9155f0641748b4c1ff2062bcf85baae7d4a7297" gracePeriod=2 Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.144315 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r2dtz" Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.274730 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af76621-923d-4846-a2a3-2cff33a4cfc2-utilities\") pod \"3af76621-923d-4846-a2a3-2cff33a4cfc2\" (UID: \"3af76621-923d-4846-a2a3-2cff33a4cfc2\") " Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.275314 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af76621-923d-4846-a2a3-2cff33a4cfc2-catalog-content\") pod \"3af76621-923d-4846-a2a3-2cff33a4cfc2\" (UID: \"3af76621-923d-4846-a2a3-2cff33a4cfc2\") " Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.275364 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr8x5\" (UniqueName: \"kubernetes.io/projected/3af76621-923d-4846-a2a3-2cff33a4cfc2-kube-api-access-hr8x5\") pod \"3af76621-923d-4846-a2a3-2cff33a4cfc2\" (UID: \"3af76621-923d-4846-a2a3-2cff33a4cfc2\") " Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.275977 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3af76621-923d-4846-a2a3-2cff33a4cfc2-utilities" (OuterVolumeSpecName: "utilities") pod "3af76621-923d-4846-a2a3-2cff33a4cfc2" (UID: "3af76621-923d-4846-a2a3-2cff33a4cfc2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.281030 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3af76621-923d-4846-a2a3-2cff33a4cfc2-kube-api-access-hr8x5" (OuterVolumeSpecName: "kube-api-access-hr8x5") pod "3af76621-923d-4846-a2a3-2cff33a4cfc2" (UID: "3af76621-923d-4846-a2a3-2cff33a4cfc2"). InnerVolumeSpecName "kube-api-access-hr8x5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.377441 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr8x5\" (UniqueName: \"kubernetes.io/projected/3af76621-923d-4846-a2a3-2cff33a4cfc2-kube-api-access-hr8x5\") on node \"crc\" DevicePath \"\"" Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.377476 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af76621-923d-4846-a2a3-2cff33a4cfc2-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.385073 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3af76621-923d-4846-a2a3-2cff33a4cfc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3af76621-923d-4846-a2a3-2cff33a4cfc2" (UID: "3af76621-923d-4846-a2a3-2cff33a4cfc2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.479694 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af76621-923d-4846-a2a3-2cff33a4cfc2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.667027 4869 generic.go:334] "Generic (PLEG): container finished" podID="3af76621-923d-4846-a2a3-2cff33a4cfc2" containerID="db9f498d00f995290cbf2411b9155f0641748b4c1ff2062bcf85baae7d4a7297" exitCode=0 Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.667071 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2dtz" event={"ID":"3af76621-923d-4846-a2a3-2cff33a4cfc2","Type":"ContainerDied","Data":"db9f498d00f995290cbf2411b9155f0641748b4c1ff2062bcf85baae7d4a7297"} Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.667108 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r2dtz" event={"ID":"3af76621-923d-4846-a2a3-2cff33a4cfc2","Type":"ContainerDied","Data":"11c4099d624986947e86fc7b8e4c2b1eb6b659c3472f56aef836e8694592a2bd"} Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.667139 4869 scope.go:117] "RemoveContainer" containerID="db9f498d00f995290cbf2411b9155f0641748b4c1ff2062bcf85baae7d4a7297" Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.667141 4869 util.go:48] "No ready sandbox for pod can be found. 
Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.694895 4869 scope.go:117] "RemoveContainer" containerID="670ee22e8d738fb5b643d0b5ccc3880e5ba4c5928c5daccd26dae4cd5bfc5ac2"
Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.704698 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r2dtz"]
Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.717447 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r2dtz"]
Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.736047 4869 scope.go:117] "RemoveContainer" containerID="b800eb49f6cc20b3a537131cc4490c1a69b18d9809cf0fdd8024c6256f0ec689"
Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.758685 4869 scope.go:117] "RemoveContainer" containerID="db9f498d00f995290cbf2411b9155f0641748b4c1ff2062bcf85baae7d4a7297"
Jan 06 14:26:36 crc kubenswrapper[4869]: E0106 14:26:36.759203 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db9f498d00f995290cbf2411b9155f0641748b4c1ff2062bcf85baae7d4a7297\": container with ID starting with db9f498d00f995290cbf2411b9155f0641748b4c1ff2062bcf85baae7d4a7297 not found: ID does not exist" containerID="db9f498d00f995290cbf2411b9155f0641748b4c1ff2062bcf85baae7d4a7297"
Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.759240 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db9f498d00f995290cbf2411b9155f0641748b4c1ff2062bcf85baae7d4a7297"} err="failed to get container status \"db9f498d00f995290cbf2411b9155f0641748b4c1ff2062bcf85baae7d4a7297\": rpc error: code = NotFound desc = could not find container \"db9f498d00f995290cbf2411b9155f0641748b4c1ff2062bcf85baae7d4a7297\": container with ID starting with db9f498d00f995290cbf2411b9155f0641748b4c1ff2062bcf85baae7d4a7297 not found: ID does not exist"
Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.759277 4869 scope.go:117] "RemoveContainer" containerID="670ee22e8d738fb5b643d0b5ccc3880e5ba4c5928c5daccd26dae4cd5bfc5ac2"
Jan 06 14:26:36 crc kubenswrapper[4869]: E0106 14:26:36.759855 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"670ee22e8d738fb5b643d0b5ccc3880e5ba4c5928c5daccd26dae4cd5bfc5ac2\": container with ID starting with 670ee22e8d738fb5b643d0b5ccc3880e5ba4c5928c5daccd26dae4cd5bfc5ac2 not found: ID does not exist" containerID="670ee22e8d738fb5b643d0b5ccc3880e5ba4c5928c5daccd26dae4cd5bfc5ac2"
Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.759879 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"670ee22e8d738fb5b643d0b5ccc3880e5ba4c5928c5daccd26dae4cd5bfc5ac2"} err="failed to get container status \"670ee22e8d738fb5b643d0b5ccc3880e5ba4c5928c5daccd26dae4cd5bfc5ac2\": rpc error: code = NotFound desc = could not find container \"670ee22e8d738fb5b643d0b5ccc3880e5ba4c5928c5daccd26dae4cd5bfc5ac2\": container with ID starting with 670ee22e8d738fb5b643d0b5ccc3880e5ba4c5928c5daccd26dae4cd5bfc5ac2 not found: ID does not exist"
Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.759896 4869 scope.go:117] "RemoveContainer" containerID="b800eb49f6cc20b3a537131cc4490c1a69b18d9809cf0fdd8024c6256f0ec689"
Jan 06 14:26:36 crc kubenswrapper[4869]: E0106 14:26:36.760336 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b800eb49f6cc20b3a537131cc4490c1a69b18d9809cf0fdd8024c6256f0ec689\": container with ID starting with b800eb49f6cc20b3a537131cc4490c1a69b18d9809cf0fdd8024c6256f0ec689 not found: ID does not exist" containerID="b800eb49f6cc20b3a537131cc4490c1a69b18d9809cf0fdd8024c6256f0ec689"
Jan 06 14:26:36 crc kubenswrapper[4869]: I0106 14:26:36.760357 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b800eb49f6cc20b3a537131cc4490c1a69b18d9809cf0fdd8024c6256f0ec689"} err="failed to get container status \"b800eb49f6cc20b3a537131cc4490c1a69b18d9809cf0fdd8024c6256f0ec689\": rpc error: code = NotFound desc = could not find container \"b800eb49f6cc20b3a537131cc4490c1a69b18d9809cf0fdd8024c6256f0ec689\": container with ID starting with b800eb49f6cc20b3a537131cc4490c1a69b18d9809cf0fdd8024c6256f0ec689 not found: ID does not exist"
Jan 06 14:26:37 crc kubenswrapper[4869]: I0106 14:26:37.714938 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3af76621-923d-4846-a2a3-2cff33a4cfc2" path="/var/lib/kubelet/pods/3af76621-923d-4846-a2a3-2cff33a4cfc2/volumes"
Jan 06 14:26:39 crc kubenswrapper[4869]: I0106 14:26:39.704879 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f"
Jan 06 14:26:39 crc kubenswrapper[4869]: E0106 14:26:39.705358 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1"
Jan 06 14:26:49 crc kubenswrapper[4869]: I0106 14:26:49.760633 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9b5ld"]
Jan 06 14:26:49 crc kubenswrapper[4869]: E0106 14:26:49.761826 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af76621-923d-4846-a2a3-2cff33a4cfc2" containerName="extract-utilities"
Jan 06 14:26:49 crc kubenswrapper[4869]: I0106 14:26:49.761846 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af76621-923d-4846-a2a3-2cff33a4cfc2" containerName="extract-utilities"
Jan 06 14:26:49 crc kubenswrapper[4869]: E0106 14:26:49.761885 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af76621-923d-4846-a2a3-2cff33a4cfc2" containerName="registry-server"
Jan 06 14:26:49 crc kubenswrapper[4869]: I0106 14:26:49.761895 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af76621-923d-4846-a2a3-2cff33a4cfc2" containerName="registry-server"
Jan 06 14:26:49 crc kubenswrapper[4869]: E0106 14:26:49.761912 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af76621-923d-4846-a2a3-2cff33a4cfc2" containerName="extract-content"
Jan 06 14:26:49 crc kubenswrapper[4869]: I0106 14:26:49.761921 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af76621-923d-4846-a2a3-2cff33a4cfc2" containerName="extract-content"
Jan 06 14:26:49 crc kubenswrapper[4869]: I0106 14:26:49.762133 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af76621-923d-4846-a2a3-2cff33a4cfc2" containerName="registry-server"
Jan 06 14:26:49 crc kubenswrapper[4869]: I0106 14:26:49.763706 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9b5ld"
Jan 06 14:26:49 crc kubenswrapper[4869]: I0106 14:26:49.768516 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf21d06-9261-41a9-b35c-00604d0e47a8-catalog-content\") pod \"certified-operators-9b5ld\" (UID: \"0cf21d06-9261-41a9-b35c-00604d0e47a8\") " pod="openshift-marketplace/certified-operators-9b5ld"
Jan 06 14:26:49 crc kubenswrapper[4869]: I0106 14:26:49.768785 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf21d06-9261-41a9-b35c-00604d0e47a8-utilities\") pod \"certified-operators-9b5ld\" (UID: \"0cf21d06-9261-41a9-b35c-00604d0e47a8\") " pod="openshift-marketplace/certified-operators-9b5ld"
Jan 06 14:26:49 crc kubenswrapper[4869]: I0106 14:26:49.768907 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ht7f\" (UniqueName: \"kubernetes.io/projected/0cf21d06-9261-41a9-b35c-00604d0e47a8-kube-api-access-8ht7f\") pod \"certified-operators-9b5ld\" (UID: \"0cf21d06-9261-41a9-b35c-00604d0e47a8\") " pod="openshift-marketplace/certified-operators-9b5ld"
Jan 06 14:26:49 crc kubenswrapper[4869]: I0106 14:26:49.784600 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9b5ld"]
Jan 06 14:26:49 crc kubenswrapper[4869]: I0106 14:26:49.870482 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf21d06-9261-41a9-b35c-00604d0e47a8-catalog-content\") pod \"certified-operators-9b5ld\" (UID: \"0cf21d06-9261-41a9-b35c-00604d0e47a8\") " pod="openshift-marketplace/certified-operators-9b5ld"
Jan 06 14:26:49 crc kubenswrapper[4869]: I0106 14:26:49.870542 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf21d06-9261-41a9-b35c-00604d0e47a8-utilities\") pod \"certified-operators-9b5ld\" (UID: \"0cf21d06-9261-41a9-b35c-00604d0e47a8\") " pod="openshift-marketplace/certified-operators-9b5ld"
Jan 06 14:26:49 crc kubenswrapper[4869]: I0106 14:26:49.870562 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ht7f\" (UniqueName: \"kubernetes.io/projected/0cf21d06-9261-41a9-b35c-00604d0e47a8-kube-api-access-8ht7f\") pod \"certified-operators-9b5ld\" (UID: \"0cf21d06-9261-41a9-b35c-00604d0e47a8\") " pod="openshift-marketplace/certified-operators-9b5ld"
Jan 06 14:26:49 crc kubenswrapper[4869]: I0106 14:26:49.871106 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf21d06-9261-41a9-b35c-00604d0e47a8-catalog-content\") pod \"certified-operators-9b5ld\" (UID: \"0cf21d06-9261-41a9-b35c-00604d0e47a8\") " pod="openshift-marketplace/certified-operators-9b5ld"
Jan 06 14:26:49 crc kubenswrapper[4869]: I0106 14:26:49.871169 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf21d06-9261-41a9-b35c-00604d0e47a8-utilities\") pod \"certified-operators-9b5ld\" (UID: \"0cf21d06-9261-41a9-b35c-00604d0e47a8\") " pod="openshift-marketplace/certified-operators-9b5ld"
Jan 06 14:26:49 crc kubenswrapper[4869]: I0106 14:26:49.890363 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ht7f\" (UniqueName: \"kubernetes.io/projected/0cf21d06-9261-41a9-b35c-00604d0e47a8-kube-api-access-8ht7f\") pod \"certified-operators-9b5ld\" (UID: \"0cf21d06-9261-41a9-b35c-00604d0e47a8\") " pod="openshift-marketplace/certified-operators-9b5ld"
Jan 06 14:26:50 crc kubenswrapper[4869]: I0106 14:26:50.095427 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9b5ld"
Jan 06 14:26:50 crc kubenswrapper[4869]: I0106 14:26:50.631059 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9b5ld"]
Jan 06 14:26:50 crc kubenswrapper[4869]: I0106 14:26:50.704539 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f"
Jan 06 14:26:50 crc kubenswrapper[4869]: E0106 14:26:50.705036 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1"
Jan 06 14:26:50 crc kubenswrapper[4869]: I0106 14:26:50.837396 4869 generic.go:334] "Generic (PLEG): container finished" podID="0cf21d06-9261-41a9-b35c-00604d0e47a8" containerID="14129807b691d702ae6a174ef0df839cda0831521fae4e8c0e05ab4049c6059c" exitCode=0
Jan 06 14:26:50 crc kubenswrapper[4869]: I0106 14:26:50.837454 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9b5ld" event={"ID":"0cf21d06-9261-41a9-b35c-00604d0e47a8","Type":"ContainerDied","Data":"14129807b691d702ae6a174ef0df839cda0831521fae4e8c0e05ab4049c6059c"}
Jan 06 14:26:50 crc kubenswrapper[4869]: I0106 14:26:50.837489 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9b5ld" event={"ID":"0cf21d06-9261-41a9-b35c-00604d0e47a8","Type":"ContainerStarted","Data":"65a62fb311df10f8308e7f90117b9b60368aacf654b16b62f9cee532c47a2fab"}
Jan 06 14:26:52 crc kubenswrapper[4869]: I0106 14:26:52.296461 4869 scope.go:117] "RemoveContainer" containerID="fbe21376fbb30002898422b545304b8279733bb9bd2d188fbe6d9453a8dc8bb2"
Jan 06 14:26:52 crc kubenswrapper[4869]: I0106 14:26:52.326687 4869 scope.go:117] "RemoveContainer" containerID="1328b103741f8aa046f192bbeb9defccb1fb83183ab89fa218846e53e49abfea"
Jan 06 14:26:52 crc kubenswrapper[4869]: I0106 14:26:52.354886 4869 scope.go:117] "RemoveContainer" containerID="1808fa89d2dc5f3f46815bc81f9f1cd07ff7f711c1510977ffe176f830be52f4"
Jan 06 14:26:52 crc kubenswrapper[4869]: I0106 14:26:52.376225 4869 scope.go:117] "RemoveContainer" containerID="6d75bd8622f014916acb8f34fa72bba5a875363533ba5f5e4882350b4ea586c2"
Jan 06 14:26:52 crc kubenswrapper[4869]: I0106 14:26:52.867616 4869 generic.go:334] "Generic (PLEG): container finished" podID="0cf21d06-9261-41a9-b35c-00604d0e47a8" containerID="36b2853b105e021626ffd80e8a4fcc69b950ad7ef7dd521863a3a822fb240aea" exitCode=0
Jan 06 14:26:52 crc kubenswrapper[4869]: I0106 14:26:52.867708 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9b5ld" event={"ID":"0cf21d06-9261-41a9-b35c-00604d0e47a8","Type":"ContainerDied","Data":"36b2853b105e021626ffd80e8a4fcc69b950ad7ef7dd521863a3a822fb240aea"}
crc kubenswrapper[4869]: I0106 14:26:53.881330 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9b5ld" event={"ID":"0cf21d06-9261-41a9-b35c-00604d0e47a8","Type":"ContainerStarted","Data":"40dfe1a6fce454f784da5f857b15792e96636ab0f30f698865cd994cf3ea7a06"} Jan 06 14:26:53 crc kubenswrapper[4869]: I0106 14:26:53.899105 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9b5ld" podStartSLOduration=2.4336787810000002 podStartE2EDuration="4.899088074s" podCreationTimestamp="2026-01-06 14:26:49 +0000 UTC" firstStartedPulling="2026-01-06 14:26:50.839379584 +0000 UTC m=+1629.379067268" lastFinishedPulling="2026-01-06 14:26:53.304788897 +0000 UTC m=+1631.844476561" observedRunningTime="2026-01-06 14:26:53.895637769 +0000 UTC m=+1632.435325443" watchObservedRunningTime="2026-01-06 14:26:53.899088074 +0000 UTC m=+1632.438775738" Jan 06 14:27:00 crc kubenswrapper[4869]: I0106 14:27:00.096593 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9b5ld" Jan 06 14:27:00 crc kubenswrapper[4869]: I0106 14:27:00.097081 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9b5ld" Jan 06 14:27:00 crc kubenswrapper[4869]: I0106 14:27:00.155711 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9b5ld" Jan 06 14:27:01 crc kubenswrapper[4869]: I0106 14:27:01.017990 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9b5ld" Jan 06 14:27:01 crc kubenswrapper[4869]: I0106 14:27:01.076015 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9b5ld"] Jan 06 14:27:02 crc kubenswrapper[4869]: I0106 14:27:02.967259 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9b5ld" podUID="0cf21d06-9261-41a9-b35c-00604d0e47a8" containerName="registry-server" containerID="cri-o://40dfe1a6fce454f784da5f857b15792e96636ab0f30f698865cd994cf3ea7a06" gracePeriod=2 Jan 06 14:27:03 crc kubenswrapper[4869]: I0106 14:27:03.705820 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f" Jan 06 14:27:03 crc kubenswrapper[4869]: E0106 14:27:03.706998 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:27:03 crc kubenswrapper[4869]: I0106 14:27:03.978478 4869 generic.go:334] "Generic (PLEG): container finished" podID="0cf21d06-9261-41a9-b35c-00604d0e47a8" containerID="40dfe1a6fce454f784da5f857b15792e96636ab0f30f698865cd994cf3ea7a06" exitCode=0 Jan 06 14:27:03 crc kubenswrapper[4869]: I0106 14:27:03.979934 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9b5ld" event={"ID":"0cf21d06-9261-41a9-b35c-00604d0e47a8","Type":"ContainerDied","Data":"40dfe1a6fce454f784da5f857b15792e96636ab0f30f698865cd994cf3ea7a06"} Jan 06 14:27:03 crc kubenswrapper[4869]: I0106 
14:27:03.980053 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9b5ld" event={"ID":"0cf21d06-9261-41a9-b35c-00604d0e47a8","Type":"ContainerDied","Data":"65a62fb311df10f8308e7f90117b9b60368aacf654b16b62f9cee532c47a2fab"} Jan 06 14:27:03 crc kubenswrapper[4869]: I0106 14:27:03.980137 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65a62fb311df10f8308e7f90117b9b60368aacf654b16b62f9cee532c47a2fab" Jan 06 14:27:03 crc kubenswrapper[4869]: I0106 14:27:03.994507 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9b5ld" Jan 06 14:27:04 crc kubenswrapper[4869]: I0106 14:27:04.161307 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf21d06-9261-41a9-b35c-00604d0e47a8-catalog-content\") pod \"0cf21d06-9261-41a9-b35c-00604d0e47a8\" (UID: \"0cf21d06-9261-41a9-b35c-00604d0e47a8\") " Jan 06 14:27:04 crc kubenswrapper[4869]: I0106 14:27:04.161464 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf21d06-9261-41a9-b35c-00604d0e47a8-utilities\") pod \"0cf21d06-9261-41a9-b35c-00604d0e47a8\" (UID: \"0cf21d06-9261-41a9-b35c-00604d0e47a8\") " Jan 06 14:27:04 crc kubenswrapper[4869]: I0106 14:27:04.161511 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ht7f\" (UniqueName: \"kubernetes.io/projected/0cf21d06-9261-41a9-b35c-00604d0e47a8-kube-api-access-8ht7f\") pod \"0cf21d06-9261-41a9-b35c-00604d0e47a8\" (UID: \"0cf21d06-9261-41a9-b35c-00604d0e47a8\") " Jan 06 14:27:04 crc kubenswrapper[4869]: I0106 14:27:04.162945 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf21d06-9261-41a9-b35c-00604d0e47a8-utilities" (OuterVolumeSpecName: "utilities") pod "0cf21d06-9261-41a9-b35c-00604d0e47a8" (UID: "0cf21d06-9261-41a9-b35c-00604d0e47a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:27:04 crc kubenswrapper[4869]: I0106 14:27:04.167748 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf21d06-9261-41a9-b35c-00604d0e47a8-kube-api-access-8ht7f" (OuterVolumeSpecName: "kube-api-access-8ht7f") pod "0cf21d06-9261-41a9-b35c-00604d0e47a8" (UID: "0cf21d06-9261-41a9-b35c-00604d0e47a8"). InnerVolumeSpecName "kube-api-access-8ht7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:27:04 crc kubenswrapper[4869]: I0106 14:27:04.212069 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cf21d06-9261-41a9-b35c-00604d0e47a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cf21d06-9261-41a9-b35c-00604d0e47a8" (UID: "0cf21d06-9261-41a9-b35c-00604d0e47a8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:27:04 crc kubenswrapper[4869]: I0106 14:27:04.264537 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cf21d06-9261-41a9-b35c-00604d0e47a8-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:27:04 crc kubenswrapper[4869]: I0106 14:27:04.264581 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ht7f\" (UniqueName: \"kubernetes.io/projected/0cf21d06-9261-41a9-b35c-00604d0e47a8-kube-api-access-8ht7f\") on node \"crc\" DevicePath \"\"" Jan 06 14:27:04 crc kubenswrapper[4869]: I0106 14:27:04.264595 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cf21d06-9261-41a9-b35c-00604d0e47a8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:27:04 crc kubenswrapper[4869]: I0106 14:27:04.986639 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9b5ld" Jan 06 14:27:05 crc kubenswrapper[4869]: I0106 14:27:05.020492 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9b5ld"] Jan 06 14:27:05 crc kubenswrapper[4869]: I0106 14:27:05.031506 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9b5ld"] Jan 06 14:27:05 crc kubenswrapper[4869]: I0106 14:27:05.721046 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf21d06-9261-41a9-b35c-00604d0e47a8" path="/var/lib/kubelet/pods/0cf21d06-9261-41a9-b35c-00604d0e47a8/volumes" Jan 06 14:27:10 crc kubenswrapper[4869]: I0106 14:27:10.086072 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-7jvwr"] Jan 06 14:27:10 crc kubenswrapper[4869]: I0106 14:27:10.098422 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-7jvwr"] Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.054024 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-2p5s8"] Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.071732 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9c77-account-create-update-4n55x"] Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.097872 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-2p5s8"] Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.111725 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1cd5-account-create-update-chlpx"] Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.125809 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9c77-account-create-update-4n55x"] Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.138309 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-1cd5-account-create-update-chlpx"] Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.750508 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e8d7b8f-1d96-4295-8fd6-954a19ecbbed" path="/var/lib/kubelet/pods/8e8d7b8f-1d96-4295-8fd6-954a19ecbbed/volumes" Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.753389 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fe5915b-db4a-4fe4-9b6f-7bb930727ccf" path="/var/lib/kubelet/pods/9fe5915b-db4a-4fe4-9b6f-7bb930727ccf/volumes" Jan 06 14:27:11 crc kubenswrapper[4869]: 
I0106 14:27:11.754984 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7b17ac2-fe53-4c33-9cf0-3142a52dc576" path="/var/lib/kubelet/pods/b7b17ac2-fe53-4c33-9cf0-3142a52dc576/volumes" Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.756174 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7d8c02b-1c43-45e6-b42c-28b229c349be" path="/var/lib/kubelet/pods/f7d8c02b-1c43-45e6-b42c-28b229c349be/volumes" Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.757939 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t5jkf"] Jan 06 14:27:11 crc kubenswrapper[4869]: E0106 14:27:11.761105 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf21d06-9261-41a9-b35c-00604d0e47a8" containerName="registry-server" Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.761321 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf21d06-9261-41a9-b35c-00604d0e47a8" containerName="registry-server" Jan 06 14:27:11 crc kubenswrapper[4869]: E0106 14:27:11.761405 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf21d06-9261-41a9-b35c-00604d0e47a8" containerName="extract-utilities" Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.761489 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf21d06-9261-41a9-b35c-00604d0e47a8" containerName="extract-utilities" Jan 06 14:27:11 crc kubenswrapper[4869]: E0106 14:27:11.761573 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf21d06-9261-41a9-b35c-00604d0e47a8" containerName="extract-content" Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.761646 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf21d06-9261-41a9-b35c-00604d0e47a8" containerName="extract-content" Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.762009 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf21d06-9261-41a9-b35c-00604d0e47a8" containerName="registry-server" Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.763900 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5jkf" Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.766109 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5jkf"] Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.820440 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449-utilities\") pod \"redhat-marketplace-t5jkf\" (UID: \"aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449\") " pod="openshift-marketplace/redhat-marketplace-t5jkf" Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.820531 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndjj6\" (UniqueName: \"kubernetes.io/projected/aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449-kube-api-access-ndjj6\") pod \"redhat-marketplace-t5jkf\" (UID: \"aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449\") " pod="openshift-marketplace/redhat-marketplace-t5jkf" Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.820683 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449-catalog-content\") pod \"redhat-marketplace-t5jkf\" (UID: \"aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449\") " pod="openshift-marketplace/redhat-marketplace-t5jkf" Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.922698 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449-utilities\") pod \"redhat-marketplace-t5jkf\" (UID: \"aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449\") " pod="openshift-marketplace/redhat-marketplace-t5jkf" Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.923123 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndjj6\" (UniqueName: \"kubernetes.io/projected/aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449-kube-api-access-ndjj6\") pod \"redhat-marketplace-t5jkf\" (UID: \"aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449\") " pod="openshift-marketplace/redhat-marketplace-t5jkf" Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.923324 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449-catalog-content\") pod \"redhat-marketplace-t5jkf\" (UID: \"aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449\") " pod="openshift-marketplace/redhat-marketplace-t5jkf" Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.923344 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449-utilities\") pod \"redhat-marketplace-t5jkf\" (UID: \"aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449\") " pod="openshift-marketplace/redhat-marketplace-t5jkf" Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.923742 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449-catalog-content\") pod \"redhat-marketplace-t5jkf\" (UID: \"aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449\") " pod="openshift-marketplace/redhat-marketplace-t5jkf" Jan 06 14:27:11 crc kubenswrapper[4869]: I0106 14:27:11.947806 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ndjj6\" (UniqueName: \"kubernetes.io/projected/aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449-kube-api-access-ndjj6\") pod \"redhat-marketplace-t5jkf\" (UID: \"aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449\") " pod="openshift-marketplace/redhat-marketplace-t5jkf" Jan 06 14:27:12 crc kubenswrapper[4869]: I0106 14:27:12.028115 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-9e7b-account-create-update-2xfb5"] Jan 06 14:27:12 crc kubenswrapper[4869]: I0106 14:27:12.036447 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-9e7b-account-create-update-2xfb5"] Jan 06 14:27:12 crc kubenswrapper[4869]: I0106 14:27:12.045730 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-694hw"] Jan 06 14:27:12 crc kubenswrapper[4869]: I0106 14:27:12.053658 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-694hw"] Jan 06 14:27:12 crc kubenswrapper[4869]: I0106 14:27:12.092779 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5jkf" Jan 06 14:27:12 crc kubenswrapper[4869]: I0106 14:27:12.479483 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5jkf"] Jan 06 14:27:13 crc kubenswrapper[4869]: I0106 14:27:13.108590 4869 generic.go:334] "Generic (PLEG): container finished" podID="aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449" containerID="eacd90310d79532fd85685d4dad507d65c6ecf24e54545c1e6c662f6f9bf73c8" exitCode=0 Jan 06 14:27:13 crc kubenswrapper[4869]: I0106 14:27:13.108659 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5jkf" event={"ID":"aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449","Type":"ContainerDied","Data":"eacd90310d79532fd85685d4dad507d65c6ecf24e54545c1e6c662f6f9bf73c8"} Jan 06 14:27:13 crc kubenswrapper[4869]: I0106 14:27:13.108982 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5jkf" event={"ID":"aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449","Type":"ContainerStarted","Data":"d8cbed252dbc9f17b47ea3ea86a3e93a050d1b9e8757e9b726f1f380fd31583e"} Jan 06 14:27:13 crc kubenswrapper[4869]: I0106 14:27:13.737404 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f47b080-a76c-4e14-bc35-6144be23522c" path="/var/lib/kubelet/pods/8f47b080-a76c-4e14-bc35-6144be23522c/volumes" Jan 06 14:27:13 crc kubenswrapper[4869]: I0106 14:27:13.738920 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e84938b5-25e0-41d4-b97f-930d703f54e9" path="/var/lib/kubelet/pods/e84938b5-25e0-41d4-b97f-930d703f54e9/volumes" Jan 06 14:27:14 crc kubenswrapper[4869]: I0106 14:27:14.122913 4869 generic.go:334] "Generic (PLEG): container finished" podID="aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449" containerID="fe46c3c344a2a496ecae0ba3a91e42db9aaee512532de441376332a87508c32b" exitCode=0 Jan 06 14:27:14 crc kubenswrapper[4869]: I0106 14:27:14.122964 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5jkf" event={"ID":"aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449","Type":"ContainerDied","Data":"fe46c3c344a2a496ecae0ba3a91e42db9aaee512532de441376332a87508c32b"} Jan 06 14:27:15 crc kubenswrapper[4869]: I0106 14:27:15.133534 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5jkf" 
event={"ID":"aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449","Type":"ContainerStarted","Data":"8b3d61f8bc30eecf6524711b688d8952625563c821b2f2ba1f823ac07fec0635"} Jan 06 14:27:15 crc kubenswrapper[4869]: I0106 14:27:15.164220 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t5jkf" podStartSLOduration=2.664559063 podStartE2EDuration="4.164192044s" podCreationTimestamp="2026-01-06 14:27:11 +0000 UTC" firstStartedPulling="2026-01-06 14:27:13.110403026 +0000 UTC m=+1651.650090690" lastFinishedPulling="2026-01-06 14:27:14.610036007 +0000 UTC m=+1653.149723671" observedRunningTime="2026-01-06 14:27:15.153393969 +0000 UTC m=+1653.693081723" watchObservedRunningTime="2026-01-06 14:27:15.164192044 +0000 UTC m=+1653.703879738" Jan 06 14:27:17 crc kubenswrapper[4869]: I0106 14:27:17.154280 4869 generic.go:334] "Generic (PLEG): container finished" podID="809af13b-e2f3-4eed-a5dc-9de20cab3ef4" containerID="9140e1c73268098bf770faeae60957604859e22ed806a4d1f7848347b08653b5" exitCode=0 Jan 06 14:27:17 crc kubenswrapper[4869]: I0106 14:27:17.154473 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct" event={"ID":"809af13b-e2f3-4eed-a5dc-9de20cab3ef4","Type":"ContainerDied","Data":"9140e1c73268098bf770faeae60957604859e22ed806a4d1f7848347b08653b5"} Jan 06 14:27:17 crc kubenswrapper[4869]: I0106 14:27:17.704835 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f" Jan 06 14:27:17 crc kubenswrapper[4869]: E0106 14:27:17.705977 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:27:18 crc kubenswrapper[4869]: I0106 14:27:18.617722 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct" Jan 06 14:27:18 crc kubenswrapper[4869]: I0106 14:27:18.650383 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm5q5\" (UniqueName: \"kubernetes.io/projected/809af13b-e2f3-4eed-a5dc-9de20cab3ef4-kube-api-access-hm5q5\") pod \"809af13b-e2f3-4eed-a5dc-9de20cab3ef4\" (UID: \"809af13b-e2f3-4eed-a5dc-9de20cab3ef4\") " Jan 06 14:27:18 crc kubenswrapper[4869]: I0106 14:27:18.650433 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/809af13b-e2f3-4eed-a5dc-9de20cab3ef4-ssh-key-openstack-edpm-ipam\") pod \"809af13b-e2f3-4eed-a5dc-9de20cab3ef4\" (UID: \"809af13b-e2f3-4eed-a5dc-9de20cab3ef4\") " Jan 06 14:27:18 crc kubenswrapper[4869]: I0106 14:27:18.650514 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/809af13b-e2f3-4eed-a5dc-9de20cab3ef4-inventory\") pod \"809af13b-e2f3-4eed-a5dc-9de20cab3ef4\" (UID: \"809af13b-e2f3-4eed-a5dc-9de20cab3ef4\") " Jan 06 14:27:18 crc kubenswrapper[4869]: I0106 14:27:18.657065 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/809af13b-e2f3-4eed-a5dc-9de20cab3ef4-kube-api-access-hm5q5" (OuterVolumeSpecName: "kube-api-access-hm5q5") pod "809af13b-e2f3-4eed-a5dc-9de20cab3ef4" (UID: "809af13b-e2f3-4eed-a5dc-9de20cab3ef4"). InnerVolumeSpecName "kube-api-access-hm5q5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:27:18 crc kubenswrapper[4869]: I0106 14:27:18.677952 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/809af13b-e2f3-4eed-a5dc-9de20cab3ef4-inventory" (OuterVolumeSpecName: "inventory") pod "809af13b-e2f3-4eed-a5dc-9de20cab3ef4" (UID: "809af13b-e2f3-4eed-a5dc-9de20cab3ef4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:27:18 crc kubenswrapper[4869]: I0106 14:27:18.682219 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/809af13b-e2f3-4eed-a5dc-9de20cab3ef4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "809af13b-e2f3-4eed-a5dc-9de20cab3ef4" (UID: "809af13b-e2f3-4eed-a5dc-9de20cab3ef4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:27:18 crc kubenswrapper[4869]: I0106 14:27:18.752537 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm5q5\" (UniqueName: \"kubernetes.io/projected/809af13b-e2f3-4eed-a5dc-9de20cab3ef4-kube-api-access-hm5q5\") on node \"crc\" DevicePath \"\"" Jan 06 14:27:18 crc kubenswrapper[4869]: I0106 14:27:18.752582 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/809af13b-e2f3-4eed-a5dc-9de20cab3ef4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:27:18 crc kubenswrapper[4869]: I0106 14:27:18.752597 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/809af13b-e2f3-4eed-a5dc-9de20cab3ef4-inventory\") on node \"crc\" DevicePath \"\"" Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.037069 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-g8z25"] Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.052529 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-g8z25"] Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.179048 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct" event={"ID":"809af13b-e2f3-4eed-a5dc-9de20cab3ef4","Type":"ContainerDied","Data":"5e8bd975a2d69df30de24008aff1a6a9625831fb310f4617990caa63919d4d95"} Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.179099 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e8bd975a2d69df30de24008aff1a6a9625831fb310f4617990caa63919d4d95" Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.179168 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct" Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.277494 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7"] Jan 06 14:27:19 crc kubenswrapper[4869]: E0106 14:27:19.278210 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="809af13b-e2f3-4eed-a5dc-9de20cab3ef4" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.278251 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="809af13b-e2f3-4eed-a5dc-9de20cab3ef4" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.278631 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="809af13b-e2f3-4eed-a5dc-9de20cab3ef4" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.279919 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7" Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.282913 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.283117 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.283390 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.283883 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.298626 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7"] Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.468840 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b05a3066-8fd0-4ce8-be80-4fab4f8c9042-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7\" (UID: \"b05a3066-8fd0-4ce8-be80-4fab4f8c9042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7" Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.469119 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b05a3066-8fd0-4ce8-be80-4fab4f8c9042-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7\" (UID: \"b05a3066-8fd0-4ce8-be80-4fab4f8c9042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7" Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.469282 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rndft\" (UniqueName: \"kubernetes.io/projected/b05a3066-8fd0-4ce8-be80-4fab4f8c9042-kube-api-access-rndft\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7\" (UID: \"b05a3066-8fd0-4ce8-be80-4fab4f8c9042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7" Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.571547 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b05a3066-8fd0-4ce8-be80-4fab4f8c9042-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7\" (UID: \"b05a3066-8fd0-4ce8-be80-4fab4f8c9042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7" Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.571721 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b05a3066-8fd0-4ce8-be80-4fab4f8c9042-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7\" (UID: \"b05a3066-8fd0-4ce8-be80-4fab4f8c9042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7" Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.571776 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rndft\" (UniqueName: 
\"kubernetes.io/projected/b05a3066-8fd0-4ce8-be80-4fab4f8c9042-kube-api-access-rndft\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7\" (UID: \"b05a3066-8fd0-4ce8-be80-4fab4f8c9042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7" Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.576563 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b05a3066-8fd0-4ce8-be80-4fab4f8c9042-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7\" (UID: \"b05a3066-8fd0-4ce8-be80-4fab4f8c9042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7" Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.577572 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b05a3066-8fd0-4ce8-be80-4fab4f8c9042-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7\" (UID: \"b05a3066-8fd0-4ce8-be80-4fab4f8c9042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7" Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.597486 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rndft\" (UniqueName: \"kubernetes.io/projected/b05a3066-8fd0-4ce8-be80-4fab4f8c9042-kube-api-access-rndft\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7\" (UID: \"b05a3066-8fd0-4ce8-be80-4fab4f8c9042\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7" Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.601607 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7" Jan 06 14:27:19 crc kubenswrapper[4869]: I0106 14:27:19.727503 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="136a4263-02a7-48bb-aace-502786258d44" path="/var/lib/kubelet/pods/136a4263-02a7-48bb-aace-502786258d44/volumes" Jan 06 14:27:20 crc kubenswrapper[4869]: I0106 14:27:20.152759 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7"] Jan 06 14:27:20 crc kubenswrapper[4869]: I0106 14:27:20.191879 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7" event={"ID":"b05a3066-8fd0-4ce8-be80-4fab4f8c9042","Type":"ContainerStarted","Data":"0b958db2d204232153acc4fad586e3f433ba2eb44c78748ca685f290b1714ec2"} Jan 06 14:27:21 crc kubenswrapper[4869]: I0106 14:27:21.202741 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7" event={"ID":"b05a3066-8fd0-4ce8-be80-4fab4f8c9042","Type":"ContainerStarted","Data":"a3692061bd5a107051fbf79c200c38647ee44d578d0d02e8cbcbbd2305ab4114"} Jan 06 14:27:21 crc kubenswrapper[4869]: I0106 14:27:21.245095 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7" podStartSLOduration=1.4490731860000001 podStartE2EDuration="2.24506138s" podCreationTimestamp="2026-01-06 14:27:19 +0000 UTC" firstStartedPulling="2026-01-06 14:27:20.161888731 +0000 UTC m=+1658.701576435" lastFinishedPulling="2026-01-06 14:27:20.957876935 +0000 UTC m=+1659.497564629" observedRunningTime="2026-01-06 14:27:21.22911394 +0000 UTC m=+1659.768801654" 
watchObservedRunningTime="2026-01-06 14:27:21.24506138 +0000 UTC m=+1659.784749064" Jan 06 14:27:22 crc kubenswrapper[4869]: I0106 14:27:22.093734 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t5jkf" Jan 06 14:27:22 crc kubenswrapper[4869]: I0106 14:27:22.094109 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t5jkf" Jan 06 14:27:22 crc kubenswrapper[4869]: I0106 14:27:22.181366 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t5jkf" Jan 06 14:27:22 crc kubenswrapper[4869]: I0106 14:27:22.293317 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t5jkf" Jan 06 14:27:22 crc kubenswrapper[4869]: I0106 14:27:22.436268 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5jkf"] Jan 06 14:27:24 crc kubenswrapper[4869]: I0106 14:27:24.235749 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t5jkf" podUID="aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449" containerName="registry-server" containerID="cri-o://8b3d61f8bc30eecf6524711b688d8952625563c821b2f2ba1f823ac07fec0635" gracePeriod=2 Jan 06 14:27:24 crc kubenswrapper[4869]: I0106 14:27:24.928945 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5jkf" Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.081466 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndjj6\" (UniqueName: \"kubernetes.io/projected/aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449-kube-api-access-ndjj6\") pod \"aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449\" (UID: \"aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449\") " Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.081613 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449-utilities\") pod \"aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449\" (UID: \"aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449\") " Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.081710 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449-catalog-content\") pod \"aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449\" (UID: \"aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449\") " Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.083120 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449-utilities" (OuterVolumeSpecName: "utilities") pod "aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449" (UID: "aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.090878 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449-kube-api-access-ndjj6" (OuterVolumeSpecName: "kube-api-access-ndjj6") pod "aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449" (UID: "aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449"). InnerVolumeSpecName "kube-api-access-ndjj6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.121942 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449" (UID: "aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.184643 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndjj6\" (UniqueName: \"kubernetes.io/projected/aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449-kube-api-access-ndjj6\") on node \"crc\" DevicePath \"\"" Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.184715 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.184726 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.246635 4869 generic.go:334] "Generic (PLEG): container finished" podID="aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449" containerID="8b3d61f8bc30eecf6524711b688d8952625563c821b2f2ba1f823ac07fec0635" exitCode=0 Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.246771 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5jkf" Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.246816 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5jkf" event={"ID":"aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449","Type":"ContainerDied","Data":"8b3d61f8bc30eecf6524711b688d8952625563c821b2f2ba1f823ac07fec0635"} Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.246892 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5jkf" event={"ID":"aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449","Type":"ContainerDied","Data":"d8cbed252dbc9f17b47ea3ea86a3e93a050d1b9e8757e9b726f1f380fd31583e"} Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.246914 4869 scope.go:117] "RemoveContainer" containerID="8b3d61f8bc30eecf6524711b688d8952625563c821b2f2ba1f823ac07fec0635" Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.284850 4869 scope.go:117] "RemoveContainer" containerID="fe46c3c344a2a496ecae0ba3a91e42db9aaee512532de441376332a87508c32b" Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.293840 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5jkf"] Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.316563 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5jkf"] Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.330565 4869 scope.go:117] "RemoveContainer" containerID="eacd90310d79532fd85685d4dad507d65c6ecf24e54545c1e6c662f6f9bf73c8" Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.375824 4869 scope.go:117] "RemoveContainer" containerID="8b3d61f8bc30eecf6524711b688d8952625563c821b2f2ba1f823ac07fec0635" Jan 06 14:27:25 crc kubenswrapper[4869]: E0106 14:27:25.376705 4869 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b3d61f8bc30eecf6524711b688d8952625563c821b2f2ba1f823ac07fec0635\": container with ID starting with 8b3d61f8bc30eecf6524711b688d8952625563c821b2f2ba1f823ac07fec0635 not found: ID does not exist" containerID="8b3d61f8bc30eecf6524711b688d8952625563c821b2f2ba1f823ac07fec0635" Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.376763 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b3d61f8bc30eecf6524711b688d8952625563c821b2f2ba1f823ac07fec0635"} err="failed to get container status \"8b3d61f8bc30eecf6524711b688d8952625563c821b2f2ba1f823ac07fec0635\": rpc error: code = NotFound desc = could not find container \"8b3d61f8bc30eecf6524711b688d8952625563c821b2f2ba1f823ac07fec0635\": container with ID starting with 8b3d61f8bc30eecf6524711b688d8952625563c821b2f2ba1f823ac07fec0635 not found: ID does not exist" Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.376788 4869 scope.go:117] "RemoveContainer" containerID="fe46c3c344a2a496ecae0ba3a91e42db9aaee512532de441376332a87508c32b" Jan 06 14:27:25 crc kubenswrapper[4869]: E0106 14:27:25.377433 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe46c3c344a2a496ecae0ba3a91e42db9aaee512532de441376332a87508c32b\": container with ID starting with fe46c3c344a2a496ecae0ba3a91e42db9aaee512532de441376332a87508c32b not found: ID does not exist" containerID="fe46c3c344a2a496ecae0ba3a91e42db9aaee512532de441376332a87508c32b" Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.377485 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe46c3c344a2a496ecae0ba3a91e42db9aaee512532de441376332a87508c32b"} err="failed to get container status \"fe46c3c344a2a496ecae0ba3a91e42db9aaee512532de441376332a87508c32b\": rpc error: code = NotFound desc = could not find container \"fe46c3c344a2a496ecae0ba3a91e42db9aaee512532de441376332a87508c32b\": container with ID starting with fe46c3c344a2a496ecae0ba3a91e42db9aaee512532de441376332a87508c32b not found: ID does not exist" Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.377553 4869 scope.go:117] "RemoveContainer" containerID="eacd90310d79532fd85685d4dad507d65c6ecf24e54545c1e6c662f6f9bf73c8" Jan 06 14:27:25 crc kubenswrapper[4869]: E0106 14:27:25.377961 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eacd90310d79532fd85685d4dad507d65c6ecf24e54545c1e6c662f6f9bf73c8\": container with ID starting with eacd90310d79532fd85685d4dad507d65c6ecf24e54545c1e6c662f6f9bf73c8 not found: ID does not exist" containerID="eacd90310d79532fd85685d4dad507d65c6ecf24e54545c1e6c662f6f9bf73c8" Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.378034 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eacd90310d79532fd85685d4dad507d65c6ecf24e54545c1e6c662f6f9bf73c8"} err="failed to get container status \"eacd90310d79532fd85685d4dad507d65c6ecf24e54545c1e6c662f6f9bf73c8\": rpc error: code = NotFound desc = could not find container \"eacd90310d79532fd85685d4dad507d65c6ecf24e54545c1e6c662f6f9bf73c8\": container with ID starting with eacd90310d79532fd85685d4dad507d65c6ecf24e54545c1e6c662f6f9bf73c8 not found: ID does not exist" Jan 06 14:27:25 crc kubenswrapper[4869]: I0106 14:27:25.720174 4869 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449" path="/var/lib/kubelet/pods/aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449/volumes" Jan 06 14:27:27 crc kubenswrapper[4869]: I0106 14:27:27.264576 4869 generic.go:334] "Generic (PLEG): container finished" podID="b05a3066-8fd0-4ce8-be80-4fab4f8c9042" containerID="a3692061bd5a107051fbf79c200c38647ee44d578d0d02e8cbcbbd2305ab4114" exitCode=0 Jan 06 14:27:27 crc kubenswrapper[4869]: I0106 14:27:27.264632 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7" event={"ID":"b05a3066-8fd0-4ce8-be80-4fab4f8c9042","Type":"ContainerDied","Data":"a3692061bd5a107051fbf79c200c38647ee44d578d0d02e8cbcbbd2305ab4114"} Jan 06 14:27:28 crc kubenswrapper[4869]: I0106 14:27:28.690094 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7" Jan 06 14:27:28 crc kubenswrapper[4869]: I0106 14:27:28.854093 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rndft\" (UniqueName: \"kubernetes.io/projected/b05a3066-8fd0-4ce8-be80-4fab4f8c9042-kube-api-access-rndft\") pod \"b05a3066-8fd0-4ce8-be80-4fab4f8c9042\" (UID: \"b05a3066-8fd0-4ce8-be80-4fab4f8c9042\") " Jan 06 14:27:28 crc kubenswrapper[4869]: I0106 14:27:28.854386 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b05a3066-8fd0-4ce8-be80-4fab4f8c9042-inventory\") pod \"b05a3066-8fd0-4ce8-be80-4fab4f8c9042\" (UID: \"b05a3066-8fd0-4ce8-be80-4fab4f8c9042\") " Jan 06 14:27:28 crc kubenswrapper[4869]: I0106 14:27:28.854590 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b05a3066-8fd0-4ce8-be80-4fab4f8c9042-ssh-key-openstack-edpm-ipam\") pod \"b05a3066-8fd0-4ce8-be80-4fab4f8c9042\" (UID: \"b05a3066-8fd0-4ce8-be80-4fab4f8c9042\") " Jan 06 14:27:28 crc kubenswrapper[4869]: I0106 14:27:28.861785 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a3066-8fd0-4ce8-be80-4fab4f8c9042-kube-api-access-rndft" (OuterVolumeSpecName: "kube-api-access-rndft") pod "b05a3066-8fd0-4ce8-be80-4fab4f8c9042" (UID: "b05a3066-8fd0-4ce8-be80-4fab4f8c9042"). InnerVolumeSpecName "kube-api-access-rndft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:27:28 crc kubenswrapper[4869]: I0106 14:27:28.888491 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a3066-8fd0-4ce8-be80-4fab4f8c9042-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b05a3066-8fd0-4ce8-be80-4fab4f8c9042" (UID: "b05a3066-8fd0-4ce8-be80-4fab4f8c9042"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:27:28 crc kubenswrapper[4869]: I0106 14:27:28.891925 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a3066-8fd0-4ce8-be80-4fab4f8c9042-inventory" (OuterVolumeSpecName: "inventory") pod "b05a3066-8fd0-4ce8-be80-4fab4f8c9042" (UID: "b05a3066-8fd0-4ce8-be80-4fab4f8c9042"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:27:28 crc kubenswrapper[4869]: I0106 14:27:28.957191 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b05a3066-8fd0-4ce8-be80-4fab4f8c9042-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:27:28 crc kubenswrapper[4869]: I0106 14:27:28.957249 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rndft\" (UniqueName: \"kubernetes.io/projected/b05a3066-8fd0-4ce8-be80-4fab4f8c9042-kube-api-access-rndft\") on node \"crc\" DevicePath \"\"" Jan 06 14:27:28 crc kubenswrapper[4869]: I0106 14:27:28.957269 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b05a3066-8fd0-4ce8-be80-4fab4f8c9042-inventory\") on node \"crc\" DevicePath \"\"" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.283997 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7" event={"ID":"b05a3066-8fd0-4ce8-be80-4fab4f8c9042","Type":"ContainerDied","Data":"0b958db2d204232153acc4fad586e3f433ba2eb44c78748ca685f290b1714ec2"} Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.284056 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b958db2d204232153acc4fad586e3f433ba2eb44c78748ca685f290b1714ec2" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.284093 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.375254 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5"] Jan 06 14:27:29 crc kubenswrapper[4869]: E0106 14:27:29.375908 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449" containerName="extract-utilities" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.375940 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449" containerName="extract-utilities" Jan 06 14:27:29 crc kubenswrapper[4869]: E0106 14:27:29.375961 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449" containerName="extract-content" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.375976 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449" containerName="extract-content" Jan 06 14:27:29 crc kubenswrapper[4869]: E0106 14:27:29.376013 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449" containerName="registry-server" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.376030 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449" containerName="registry-server" Jan 06 14:27:29 crc kubenswrapper[4869]: E0106 14:27:29.376074 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05a3066-8fd0-4ce8-be80-4fab4f8c9042" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.376088 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05a3066-8fd0-4ce8-be80-4fab4f8c9042" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 06 14:27:29 crc kubenswrapper[4869]: 
I0106 14:27:29.376498 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa7c0c9d-ec52-4948-b7ed-8bfee1ce4449" containerName="registry-server" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.376545 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b05a3066-8fd0-4ce8-be80-4fab4f8c9042" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.377617 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.379960 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.379960 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.380402 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.380787 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.391726 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5"] Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.571236 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f450f40-ddac-47b1-b571-35d3c04fdcfc-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qjr5\" (UID: \"8f450f40-ddac-47b1-b571-35d3c04fdcfc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.571303 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f450f40-ddac-47b1-b571-35d3c04fdcfc-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qjr5\" (UID: \"8f450f40-ddac-47b1-b571-35d3c04fdcfc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.571341 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb5gt\" (UniqueName: \"kubernetes.io/projected/8f450f40-ddac-47b1-b571-35d3c04fdcfc-kube-api-access-hb5gt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qjr5\" (UID: \"8f450f40-ddac-47b1-b571-35d3c04fdcfc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.673034 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f450f40-ddac-47b1-b571-35d3c04fdcfc-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qjr5\" (UID: \"8f450f40-ddac-47b1-b571-35d3c04fdcfc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.673104 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb5gt\" (UniqueName: 
\"kubernetes.io/projected/8f450f40-ddac-47b1-b571-35d3c04fdcfc-kube-api-access-hb5gt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qjr5\" (UID: \"8f450f40-ddac-47b1-b571-35d3c04fdcfc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.673278 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f450f40-ddac-47b1-b571-35d3c04fdcfc-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qjr5\" (UID: \"8f450f40-ddac-47b1-b571-35d3c04fdcfc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.680109 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f450f40-ddac-47b1-b571-35d3c04fdcfc-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qjr5\" (UID: \"8f450f40-ddac-47b1-b571-35d3c04fdcfc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.687207 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f450f40-ddac-47b1-b571-35d3c04fdcfc-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qjr5\" (UID: \"8f450f40-ddac-47b1-b571-35d3c04fdcfc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.701368 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb5gt\" (UniqueName: \"kubernetes.io/projected/8f450f40-ddac-47b1-b571-35d3c04fdcfc-kube-api-access-hb5gt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7qjr5\" (UID: \"8f450f40-ddac-47b1-b571-35d3c04fdcfc\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.703733 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5" Jan 06 14:27:29 crc kubenswrapper[4869]: I0106 14:27:29.706644 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f" Jan 06 14:27:29 crc kubenswrapper[4869]: E0106 14:27:29.707822 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:27:30 crc kubenswrapper[4869]: W0106 14:27:30.305204 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f450f40_ddac_47b1_b571_35d3c04fdcfc.slice/crio-cdae7ae0bccd1b56262b58aaf4179c26325820c86442935a01547ad57fc27a11 WatchSource:0}: Error finding container cdae7ae0bccd1b56262b58aaf4179c26325820c86442935a01547ad57fc27a11: Status 404 returned error can't find the container with id cdae7ae0bccd1b56262b58aaf4179c26325820c86442935a01547ad57fc27a11 Jan 06 14:27:30 crc kubenswrapper[4869]: I0106 14:27:30.309842 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5"] Jan 06 14:27:31 crc kubenswrapper[4869]: I0106 14:27:31.308953 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5" event={"ID":"8f450f40-ddac-47b1-b571-35d3c04fdcfc","Type":"ContainerStarted","Data":"2a9d64a12fd3b35e4b2549e66b98f96a3c3184c64f588b0df588a4ba75bf2048"} Jan 06 14:27:31 crc kubenswrapper[4869]: I0106 14:27:31.309399 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5" event={"ID":"8f450f40-ddac-47b1-b571-35d3c04fdcfc","Type":"ContainerStarted","Data":"cdae7ae0bccd1b56262b58aaf4179c26325820c86442935a01547ad57fc27a11"} Jan 06 14:27:34 crc kubenswrapper[4869]: I0106 14:27:34.344334 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5" podStartSLOduration=4.849515317 podStartE2EDuration="5.344318263s" podCreationTimestamp="2026-01-06 14:27:29 +0000 UTC" firstStartedPulling="2026-01-06 14:27:30.30806444 +0000 UTC m=+1668.847752104" lastFinishedPulling="2026-01-06 14:27:30.802867386 +0000 UTC m=+1669.342555050" observedRunningTime="2026-01-06 14:27:31.329798407 +0000 UTC m=+1669.869486071" watchObservedRunningTime="2026-01-06 14:27:34.344318263 +0000 UTC m=+1672.884005927" Jan 06 14:27:34 crc kubenswrapper[4869]: I0106 14:27:34.348459 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gtmq5"] Jan 06 14:27:34 crc kubenswrapper[4869]: I0106 14:27:34.350472 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gtmq5" Jan 06 14:27:34 crc kubenswrapper[4869]: I0106 14:27:34.362746 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtmq5"] Jan 06 14:27:34 crc kubenswrapper[4869]: I0106 14:27:34.385181 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4522a11f-2ca8-4fa4-a840-7063978af7e2-catalog-content\") pod \"community-operators-gtmq5\" (UID: \"4522a11f-2ca8-4fa4-a840-7063978af7e2\") " pod="openshift-marketplace/community-operators-gtmq5" Jan 06 14:27:34 crc kubenswrapper[4869]: I0106 14:27:34.385451 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm9wl\" (UniqueName: \"kubernetes.io/projected/4522a11f-2ca8-4fa4-a840-7063978af7e2-kube-api-access-gm9wl\") pod \"community-operators-gtmq5\" (UID: \"4522a11f-2ca8-4fa4-a840-7063978af7e2\") " pod="openshift-marketplace/community-operators-gtmq5" Jan 06 14:27:34 crc kubenswrapper[4869]: I0106 14:27:34.385755 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4522a11f-2ca8-4fa4-a840-7063978af7e2-utilities\") pod \"community-operators-gtmq5\" (UID: \"4522a11f-2ca8-4fa4-a840-7063978af7e2\") " pod="openshift-marketplace/community-operators-gtmq5" Jan 06 14:27:34 crc kubenswrapper[4869]: I0106 14:27:34.486752 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4522a11f-2ca8-4fa4-a840-7063978af7e2-utilities\") pod \"community-operators-gtmq5\" (UID: \"4522a11f-2ca8-4fa4-a840-7063978af7e2\") " pod="openshift-marketplace/community-operators-gtmq5" Jan 06 14:27:34 crc kubenswrapper[4869]: I0106 14:27:34.486849 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4522a11f-2ca8-4fa4-a840-7063978af7e2-catalog-content\") pod \"community-operators-gtmq5\" (UID: \"4522a11f-2ca8-4fa4-a840-7063978af7e2\") " pod="openshift-marketplace/community-operators-gtmq5" Jan 06 14:27:34 crc kubenswrapper[4869]: I0106 14:27:34.486894 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm9wl\" (UniqueName: \"kubernetes.io/projected/4522a11f-2ca8-4fa4-a840-7063978af7e2-kube-api-access-gm9wl\") pod \"community-operators-gtmq5\" (UID: \"4522a11f-2ca8-4fa4-a840-7063978af7e2\") " pod="openshift-marketplace/community-operators-gtmq5" Jan 06 14:27:34 crc kubenswrapper[4869]: I0106 14:27:34.487429 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4522a11f-2ca8-4fa4-a840-7063978af7e2-utilities\") pod \"community-operators-gtmq5\" (UID: \"4522a11f-2ca8-4fa4-a840-7063978af7e2\") " pod="openshift-marketplace/community-operators-gtmq5" Jan 06 14:27:34 crc kubenswrapper[4869]: I0106 14:27:34.487477 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4522a11f-2ca8-4fa4-a840-7063978af7e2-catalog-content\") pod \"community-operators-gtmq5\" (UID: \"4522a11f-2ca8-4fa4-a840-7063978af7e2\") " pod="openshift-marketplace/community-operators-gtmq5" Jan 06 14:27:34 crc kubenswrapper[4869]: I0106 14:27:34.525749 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gm9wl\" (UniqueName: \"kubernetes.io/projected/4522a11f-2ca8-4fa4-a840-7063978af7e2-kube-api-access-gm9wl\") pod \"community-operators-gtmq5\" (UID: \"4522a11f-2ca8-4fa4-a840-7063978af7e2\") " pod="openshift-marketplace/community-operators-gtmq5" Jan 06 14:27:34 crc kubenswrapper[4869]: I0106 14:27:34.673155 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtmq5" Jan 06 14:27:35 crc kubenswrapper[4869]: I0106 14:27:35.029892 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-v7q9b"] Jan 06 14:27:35 crc kubenswrapper[4869]: I0106 14:27:35.039442 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-v7q9b"] Jan 06 14:27:35 crc kubenswrapper[4869]: I0106 14:27:35.189366 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtmq5"] Jan 06 14:27:35 crc kubenswrapper[4869]: I0106 14:27:35.359603 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtmq5" event={"ID":"4522a11f-2ca8-4fa4-a840-7063978af7e2","Type":"ContainerStarted","Data":"35e8474a8e9c7c23f88413a22a07f4b99bd9aa60f59bbd2297e330af43dd3bae"} Jan 06 14:27:35 crc kubenswrapper[4869]: I0106 14:27:35.722380 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f215a91-c46f-447f-b277-362b4d419ed5" path="/var/lib/kubelet/pods/8f215a91-c46f-447f-b277-362b4d419ed5/volumes" Jan 06 14:27:36 crc kubenswrapper[4869]: I0106 14:27:36.392626 4869 generic.go:334] "Generic (PLEG): container finished" podID="4522a11f-2ca8-4fa4-a840-7063978af7e2" containerID="22e7d1f1ccfc8a62c4067180578ae2107ba08a63faf3a933cb564a1f930a36e5" exitCode=0 Jan 06 14:27:36 crc kubenswrapper[4869]: I0106 14:27:36.392777 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtmq5" event={"ID":"4522a11f-2ca8-4fa4-a840-7063978af7e2","Type":"ContainerDied","Data":"22e7d1f1ccfc8a62c4067180578ae2107ba08a63faf3a933cb564a1f930a36e5"} Jan 06 14:27:38 crc kubenswrapper[4869]: I0106 14:27:38.412040 4869 generic.go:334] "Generic (PLEG): container finished" podID="4522a11f-2ca8-4fa4-a840-7063978af7e2" containerID="5dd449b6570ca641f3ae9e2df734159388c93fe2b9f840a8f69db27085dadd3f" exitCode=0 Jan 06 14:27:38 crc kubenswrapper[4869]: I0106 14:27:38.412114 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtmq5" event={"ID":"4522a11f-2ca8-4fa4-a840-7063978af7e2","Type":"ContainerDied","Data":"5dd449b6570ca641f3ae9e2df734159388c93fe2b9f840a8f69db27085dadd3f"} Jan 06 14:27:39 crc kubenswrapper[4869]: I0106 14:27:39.425600 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtmq5" event={"ID":"4522a11f-2ca8-4fa4-a840-7063978af7e2","Type":"ContainerStarted","Data":"eb773bb7ad363247952d8d8a5bb96a57a4968c2f90d274aabbe55534ae995c27"} Jan 06 14:27:39 crc kubenswrapper[4869]: I0106 14:27:39.454551 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gtmq5" podStartSLOduration=2.793303695 podStartE2EDuration="5.454522969s" podCreationTimestamp="2026-01-06 14:27:34 +0000 UTC" firstStartedPulling="2026-01-06 14:27:36.397370753 +0000 UTC m=+1674.937058447" lastFinishedPulling="2026-01-06 14:27:39.058590057 +0000 UTC m=+1677.598277721" observedRunningTime="2026-01-06 
14:27:39.441190394 +0000 UTC m=+1677.980878098" watchObservedRunningTime="2026-01-06 14:27:39.454522969 +0000 UTC m=+1677.994210673" Jan 06 14:27:41 crc kubenswrapper[4869]: I0106 14:27:41.713267 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f" Jan 06 14:27:41 crc kubenswrapper[4869]: E0106 14:27:41.713896 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:27:44 crc kubenswrapper[4869]: I0106 14:27:44.673336 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gtmq5" Jan 06 14:27:44 crc kubenswrapper[4869]: I0106 14:27:44.674256 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gtmq5" Jan 06 14:27:44 crc kubenswrapper[4869]: I0106 14:27:44.761522 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gtmq5" Jan 06 14:27:45 crc kubenswrapper[4869]: I0106 14:27:45.560073 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gtmq5" Jan 06 14:27:45 crc kubenswrapper[4869]: I0106 14:27:45.611468 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gtmq5"] Jan 06 14:27:47 crc kubenswrapper[4869]: I0106 14:27:47.514703 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gtmq5" podUID="4522a11f-2ca8-4fa4-a840-7063978af7e2" containerName="registry-server" containerID="cri-o://eb773bb7ad363247952d8d8a5bb96a57a4968c2f90d274aabbe55534ae995c27" gracePeriod=2 Jan 06 14:27:47 crc kubenswrapper[4869]: I0106 14:27:47.996632 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gtmq5" Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.051498 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-2hhw5"] Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.062132 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-0d3a-account-create-update-ht79k"] Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.075473 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-d360-account-create-update-mj76v"] Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.083166 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-85w6r"] Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.091054 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-0d3a-account-create-update-ht79k"] Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.097921 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-2hhw5"] Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.107549 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-85w6r"] Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.114495 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-d360-account-create-update-mj76v"] Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.161027 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm9wl\" (UniqueName: \"kubernetes.io/projected/4522a11f-2ca8-4fa4-a840-7063978af7e2-kube-api-access-gm9wl\") pod \"4522a11f-2ca8-4fa4-a840-7063978af7e2\" (UID: \"4522a11f-2ca8-4fa4-a840-7063978af7e2\") " Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.161126 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4522a11f-2ca8-4fa4-a840-7063978af7e2-catalog-content\") pod \"4522a11f-2ca8-4fa4-a840-7063978af7e2\" (UID: \"4522a11f-2ca8-4fa4-a840-7063978af7e2\") " Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.161153 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4522a11f-2ca8-4fa4-a840-7063978af7e2-utilities\") pod \"4522a11f-2ca8-4fa4-a840-7063978af7e2\" (UID: \"4522a11f-2ca8-4fa4-a840-7063978af7e2\") " Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.162545 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4522a11f-2ca8-4fa4-a840-7063978af7e2-utilities" (OuterVolumeSpecName: "utilities") pod "4522a11f-2ca8-4fa4-a840-7063978af7e2" (UID: "4522a11f-2ca8-4fa4-a840-7063978af7e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.168883 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4522a11f-2ca8-4fa4-a840-7063978af7e2-kube-api-access-gm9wl" (OuterVolumeSpecName: "kube-api-access-gm9wl") pod "4522a11f-2ca8-4fa4-a840-7063978af7e2" (UID: "4522a11f-2ca8-4fa4-a840-7063978af7e2"). InnerVolumeSpecName "kube-api-access-gm9wl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.216125 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4522a11f-2ca8-4fa4-a840-7063978af7e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4522a11f-2ca8-4fa4-a840-7063978af7e2" (UID: "4522a11f-2ca8-4fa4-a840-7063978af7e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.262923 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm9wl\" (UniqueName: \"kubernetes.io/projected/4522a11f-2ca8-4fa4-a840-7063978af7e2-kube-api-access-gm9wl\") on node \"crc\" DevicePath \"\"" Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.262955 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4522a11f-2ca8-4fa4-a840-7063978af7e2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.262967 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4522a11f-2ca8-4fa4-a840-7063978af7e2-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.527002 4869 generic.go:334] "Generic (PLEG): container finished" podID="4522a11f-2ca8-4fa4-a840-7063978af7e2" containerID="eb773bb7ad363247952d8d8a5bb96a57a4968c2f90d274aabbe55534ae995c27" exitCode=0 Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.527046 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtmq5" event={"ID":"4522a11f-2ca8-4fa4-a840-7063978af7e2","Type":"ContainerDied","Data":"eb773bb7ad363247952d8d8a5bb96a57a4968c2f90d274aabbe55534ae995c27"} Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.527068 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gtmq5" Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.527086 4869 scope.go:117] "RemoveContainer" containerID="eb773bb7ad363247952d8d8a5bb96a57a4968c2f90d274aabbe55534ae995c27" Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.527073 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtmq5" event={"ID":"4522a11f-2ca8-4fa4-a840-7063978af7e2","Type":"ContainerDied","Data":"35e8474a8e9c7c23f88413a22a07f4b99bd9aa60f59bbd2297e330af43dd3bae"} Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.559441 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gtmq5"] Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.562921 4869 scope.go:117] "RemoveContainer" containerID="5dd449b6570ca641f3ae9e2df734159388c93fe2b9f840a8f69db27085dadd3f" Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.567672 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gtmq5"] Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.583861 4869 scope.go:117] "RemoveContainer" containerID="22e7d1f1ccfc8a62c4067180578ae2107ba08a63faf3a933cb564a1f930a36e5" Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.619123 4869 scope.go:117] "RemoveContainer" containerID="eb773bb7ad363247952d8d8a5bb96a57a4968c2f90d274aabbe55534ae995c27" Jan 06 14:27:48 crc kubenswrapper[4869]: E0106 14:27:48.619499 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb773bb7ad363247952d8d8a5bb96a57a4968c2f90d274aabbe55534ae995c27\": container with ID starting with eb773bb7ad363247952d8d8a5bb96a57a4968c2f90d274aabbe55534ae995c27 not found: ID does not exist" containerID="eb773bb7ad363247952d8d8a5bb96a57a4968c2f90d274aabbe55534ae995c27" Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.619583 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb773bb7ad363247952d8d8a5bb96a57a4968c2f90d274aabbe55534ae995c27"} err="failed to get container status \"eb773bb7ad363247952d8d8a5bb96a57a4968c2f90d274aabbe55534ae995c27\": rpc error: code = NotFound desc = could not find container \"eb773bb7ad363247952d8d8a5bb96a57a4968c2f90d274aabbe55534ae995c27\": container with ID starting with eb773bb7ad363247952d8d8a5bb96a57a4968c2f90d274aabbe55534ae995c27 not found: ID does not exist" Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.619649 4869 scope.go:117] "RemoveContainer" containerID="5dd449b6570ca641f3ae9e2df734159388c93fe2b9f840a8f69db27085dadd3f" Jan 06 14:27:48 crc kubenswrapper[4869]: E0106 14:27:48.620035 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dd449b6570ca641f3ae9e2df734159388c93fe2b9f840a8f69db27085dadd3f\": container with ID starting with 5dd449b6570ca641f3ae9e2df734159388c93fe2b9f840a8f69db27085dadd3f not found: ID does not exist" containerID="5dd449b6570ca641f3ae9e2df734159388c93fe2b9f840a8f69db27085dadd3f" Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.620075 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dd449b6570ca641f3ae9e2df734159388c93fe2b9f840a8f69db27085dadd3f"} err="failed to get container status \"5dd449b6570ca641f3ae9e2df734159388c93fe2b9f840a8f69db27085dadd3f\": rpc error: code = NotFound desc = could not find 
container \"5dd449b6570ca641f3ae9e2df734159388c93fe2b9f840a8f69db27085dadd3f\": container with ID starting with 5dd449b6570ca641f3ae9e2df734159388c93fe2b9f840a8f69db27085dadd3f not found: ID does not exist" Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.620102 4869 scope.go:117] "RemoveContainer" containerID="22e7d1f1ccfc8a62c4067180578ae2107ba08a63faf3a933cb564a1f930a36e5" Jan 06 14:27:48 crc kubenswrapper[4869]: E0106 14:27:48.620520 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22e7d1f1ccfc8a62c4067180578ae2107ba08a63faf3a933cb564a1f930a36e5\": container with ID starting with 22e7d1f1ccfc8a62c4067180578ae2107ba08a63faf3a933cb564a1f930a36e5 not found: ID does not exist" containerID="22e7d1f1ccfc8a62c4067180578ae2107ba08a63faf3a933cb564a1f930a36e5" Jan 06 14:27:48 crc kubenswrapper[4869]: I0106 14:27:48.620542 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22e7d1f1ccfc8a62c4067180578ae2107ba08a63faf3a933cb564a1f930a36e5"} err="failed to get container status \"22e7d1f1ccfc8a62c4067180578ae2107ba08a63faf3a933cb564a1f930a36e5\": rpc error: code = NotFound desc = could not find container \"22e7d1f1ccfc8a62c4067180578ae2107ba08a63faf3a933cb564a1f930a36e5\": container with ID starting with 22e7d1f1ccfc8a62c4067180578ae2107ba08a63faf3a933cb564a1f930a36e5 not found: ID does not exist" Jan 06 14:27:49 crc kubenswrapper[4869]: I0106 14:27:49.040040 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-454e-account-create-update-kvssd"] Jan 06 14:27:49 crc kubenswrapper[4869]: I0106 14:27:49.050817 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-xzm5r"] Jan 06 14:27:49 crc kubenswrapper[4869]: I0106 14:27:49.060095 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-454e-account-create-update-kvssd"] Jan 06 14:27:49 crc kubenswrapper[4869]: I0106 14:27:49.069166 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-xzm5r"] Jan 06 14:27:49 crc kubenswrapper[4869]: I0106 14:27:49.720843 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bc8c590-2b3d-47a0-ada1-029b0d12210d" path="/var/lib/kubelet/pods/1bc8c590-2b3d-47a0-ada1-029b0d12210d/volumes" Jan 06 14:27:49 crc kubenswrapper[4869]: I0106 14:27:49.725749 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42ae2e09-f75f-4bb9-927d-6b0aba81872f" path="/var/lib/kubelet/pods/42ae2e09-f75f-4bb9-927d-6b0aba81872f/volumes" Jan 06 14:27:49 crc kubenswrapper[4869]: I0106 14:27:49.726449 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4522a11f-2ca8-4fa4-a840-7063978af7e2" path="/var/lib/kubelet/pods/4522a11f-2ca8-4fa4-a840-7063978af7e2/volumes" Jan 06 14:27:49 crc kubenswrapper[4869]: I0106 14:27:49.727353 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e6602c6-cd27-4d18-91d8-47d0eb285a52" path="/var/lib/kubelet/pods/6e6602c6-cd27-4d18-91d8-47d0eb285a52/volumes" Jan 06 14:27:49 crc kubenswrapper[4869]: I0106 14:27:49.728756 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c74a40f5-6fe0-406f-bf62-6d643e7f7f22" path="/var/lib/kubelet/pods/c74a40f5-6fe0-406f-bf62-6d643e7f7f22/volumes" Jan 06 14:27:49 crc kubenswrapper[4869]: I0106 14:27:49.729445 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6589777-9306-4d6c-9c5a-ae0961448cb9" 
path="/var/lib/kubelet/pods/d6589777-9306-4d6c-9c5a-ae0961448cb9/volumes" Jan 06 14:27:49 crc kubenswrapper[4869]: I0106 14:27:49.730167 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc9cd7d4-55b8-4008-9b37-040142576d79" path="/var/lib/kubelet/pods/fc9cd7d4-55b8-4008-9b37-040142576d79/volumes" Jan 06 14:27:52 crc kubenswrapper[4869]: I0106 14:27:52.490615 4869 scope.go:117] "RemoveContainer" containerID="85ea616470a2cded040cc3cc651c200b3daee0e6fe3688dbacef53871f1896c7" Jan 06 14:27:52 crc kubenswrapper[4869]: I0106 14:27:52.523972 4869 scope.go:117] "RemoveContainer" containerID="230d35f321a66e06386c61ba26dc9521f1cc90936ab181595a41774657300e0a" Jan 06 14:27:52 crc kubenswrapper[4869]: I0106 14:27:52.589412 4869 scope.go:117] "RemoveContainer" containerID="c6ac9d4abdc21b306e280d469f81737bfa0512e12d41888f8b5594a635716d9e" Jan 06 14:27:52 crc kubenswrapper[4869]: I0106 14:27:52.616276 4869 scope.go:117] "RemoveContainer" containerID="dafb7fc434f608e57d36a32efeb35d7b7799fcd2c32a62cde1342cb35702bfa0" Jan 06 14:27:52 crc kubenswrapper[4869]: I0106 14:27:52.658960 4869 scope.go:117] "RemoveContainer" containerID="54a396271a91ca6e7b7bbda20013274012e26713c6c64fc565b461cc97e02461" Jan 06 14:27:52 crc kubenswrapper[4869]: I0106 14:27:52.688977 4869 scope.go:117] "RemoveContainer" containerID="229d41613f2748f977f52a4d2b9ef2229decbbb63605642b3b2df09f54ffed8d" Jan 06 14:27:52 crc kubenswrapper[4869]: I0106 14:27:52.732752 4869 scope.go:117] "RemoveContainer" containerID="310fbcb3336336496ef32b3268ae9dbf1d5744d85dcdf4d47be24a4461d51341" Jan 06 14:27:52 crc kubenswrapper[4869]: I0106 14:27:52.761420 4869 scope.go:117] "RemoveContainer" containerID="6132bf5750c82f4376a8a5e267091c41ee4b713dd2b2e6ce5eed7b2e686b84ea" Jan 06 14:27:52 crc kubenswrapper[4869]: I0106 14:27:52.794246 4869 scope.go:117] "RemoveContainer" containerID="f92e742ecc0cd2df0c935dfec5aaa06b33fac16255dd5c592538254db7f66cc9" Jan 06 14:27:52 crc kubenswrapper[4869]: I0106 14:27:52.823256 4869 scope.go:117] "RemoveContainer" containerID="592690da299cf60315705702f7cb4dde9dd7b6ba626ada6033397602ad2f141a" Jan 06 14:27:52 crc kubenswrapper[4869]: I0106 14:27:52.841174 4869 scope.go:117] "RemoveContainer" containerID="b8fef457c44e99821bf3aecbd4c9c5499c3a7d49ad776bef5684808fbf6115ad" Jan 06 14:27:52 crc kubenswrapper[4869]: I0106 14:27:52.860840 4869 scope.go:117] "RemoveContainer" containerID="4bc873c18e0717ac68ad8de97820d45dc0ec7156feeac4e0aef3f75cd60949f7" Jan 06 14:27:52 crc kubenswrapper[4869]: I0106 14:27:52.892522 4869 scope.go:117] "RemoveContainer" containerID="e43d187ad82cbed7e0da120fc4fde25249907a8cc6ac648cf927ca96a74eb96e" Jan 06 14:27:52 crc kubenswrapper[4869]: I0106 14:27:52.912920 4869 scope.go:117] "RemoveContainer" containerID="403154abd9d50451f7632fb702543fb1a38aee0d3e6c89082ce9926fd905d6b5" Jan 06 14:27:55 crc kubenswrapper[4869]: I0106 14:27:55.706091 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f" Jan 06 14:27:55 crc kubenswrapper[4869]: E0106 14:27:55.707046 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:27:58 crc kubenswrapper[4869]: I0106 
14:27:58.034844 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-k4szp"] Jan 06 14:27:58 crc kubenswrapper[4869]: I0106 14:27:58.050649 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-k4szp"] Jan 06 14:27:59 crc kubenswrapper[4869]: I0106 14:27:59.716259 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23" path="/var/lib/kubelet/pods/2e8e7f91-47e8-4ca2-a5a0-9284d7f52d23/volumes" Jan 06 14:28:10 crc kubenswrapper[4869]: I0106 14:28:10.704795 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f" Jan 06 14:28:10 crc kubenswrapper[4869]: E0106 14:28:10.706992 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:28:15 crc kubenswrapper[4869]: I0106 14:28:15.836067 4869 generic.go:334] "Generic (PLEG): container finished" podID="8f450f40-ddac-47b1-b571-35d3c04fdcfc" containerID="2a9d64a12fd3b35e4b2549e66b98f96a3c3184c64f588b0df588a4ba75bf2048" exitCode=0 Jan 06 14:28:15 crc kubenswrapper[4869]: I0106 14:28:15.836152 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5" event={"ID":"8f450f40-ddac-47b1-b571-35d3c04fdcfc","Type":"ContainerDied","Data":"2a9d64a12fd3b35e4b2549e66b98f96a3c3184c64f588b0df588a4ba75bf2048"} Jan 06 14:28:17 crc kubenswrapper[4869]: I0106 14:28:17.386580 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5" Jan 06 14:28:17 crc kubenswrapper[4869]: I0106 14:28:17.419079 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f450f40-ddac-47b1-b571-35d3c04fdcfc-inventory\") pod \"8f450f40-ddac-47b1-b571-35d3c04fdcfc\" (UID: \"8f450f40-ddac-47b1-b571-35d3c04fdcfc\") " Jan 06 14:28:17 crc kubenswrapper[4869]: I0106 14:28:17.419257 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f450f40-ddac-47b1-b571-35d3c04fdcfc-ssh-key-openstack-edpm-ipam\") pod \"8f450f40-ddac-47b1-b571-35d3c04fdcfc\" (UID: \"8f450f40-ddac-47b1-b571-35d3c04fdcfc\") " Jan 06 14:28:17 crc kubenswrapper[4869]: I0106 14:28:17.419318 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb5gt\" (UniqueName: \"kubernetes.io/projected/8f450f40-ddac-47b1-b571-35d3c04fdcfc-kube-api-access-hb5gt\") pod \"8f450f40-ddac-47b1-b571-35d3c04fdcfc\" (UID: \"8f450f40-ddac-47b1-b571-35d3c04fdcfc\") " Jan 06 14:28:17 crc kubenswrapper[4869]: I0106 14:28:17.427470 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f450f40-ddac-47b1-b571-35d3c04fdcfc-kube-api-access-hb5gt" (OuterVolumeSpecName: "kube-api-access-hb5gt") pod "8f450f40-ddac-47b1-b571-35d3c04fdcfc" (UID: "8f450f40-ddac-47b1-b571-35d3c04fdcfc"). InnerVolumeSpecName "kube-api-access-hb5gt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:28:17 crc kubenswrapper[4869]: I0106 14:28:17.450344 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f450f40-ddac-47b1-b571-35d3c04fdcfc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8f450f40-ddac-47b1-b571-35d3c04fdcfc" (UID: "8f450f40-ddac-47b1-b571-35d3c04fdcfc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:28:17 crc kubenswrapper[4869]: I0106 14:28:17.471700 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f450f40-ddac-47b1-b571-35d3c04fdcfc-inventory" (OuterVolumeSpecName: "inventory") pod "8f450f40-ddac-47b1-b571-35d3c04fdcfc" (UID: "8f450f40-ddac-47b1-b571-35d3c04fdcfc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:28:17 crc kubenswrapper[4869]: I0106 14:28:17.521564 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f450f40-ddac-47b1-b571-35d3c04fdcfc-inventory\") on node \"crc\" DevicePath \"\"" Jan 06 14:28:17 crc kubenswrapper[4869]: I0106 14:28:17.521634 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f450f40-ddac-47b1-b571-35d3c04fdcfc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:28:17 crc kubenswrapper[4869]: I0106 14:28:17.521655 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb5gt\" (UniqueName: \"kubernetes.io/projected/8f450f40-ddac-47b1-b571-35d3c04fdcfc-kube-api-access-hb5gt\") on node \"crc\" DevicePath \"\"" Jan 06 14:28:17 crc kubenswrapper[4869]: I0106 14:28:17.864744 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5" event={"ID":"8f450f40-ddac-47b1-b571-35d3c04fdcfc","Type":"ContainerDied","Data":"cdae7ae0bccd1b56262b58aaf4179c26325820c86442935a01547ad57fc27a11"} Jan 06 14:28:17 crc kubenswrapper[4869]: I0106 14:28:17.864866 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdae7ae0bccd1b56262b58aaf4179c26325820c86442935a01547ad57fc27a11" Jan 06 14:28:17 crc kubenswrapper[4869]: I0106 14:28:17.864800 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.000943 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt"] Jan 06 14:28:18 crc kubenswrapper[4869]: E0106 14:28:18.001904 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4522a11f-2ca8-4fa4-a840-7063978af7e2" containerName="extract-content" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.001920 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4522a11f-2ca8-4fa4-a840-7063978af7e2" containerName="extract-content" Jan 06 14:28:18 crc kubenswrapper[4869]: E0106 14:28:18.001931 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4522a11f-2ca8-4fa4-a840-7063978af7e2" containerName="registry-server" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.001940 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4522a11f-2ca8-4fa4-a840-7063978af7e2" containerName="registry-server" Jan 06 14:28:18 crc kubenswrapper[4869]: E0106 14:28:18.001960 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f450f40-ddac-47b1-b571-35d3c04fdcfc" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.001969 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f450f40-ddac-47b1-b571-35d3c04fdcfc" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 06 14:28:18 crc kubenswrapper[4869]: E0106 14:28:18.002010 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4522a11f-2ca8-4fa4-a840-7063978af7e2" containerName="extract-utilities" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.002018 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="4522a11f-2ca8-4fa4-a840-7063978af7e2" containerName="extract-utilities" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.002234 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f450f40-ddac-47b1-b571-35d3c04fdcfc" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.002263 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="4522a11f-2ca8-4fa4-a840-7063978af7e2" containerName="registry-server" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.003063 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.006301 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.010408 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.010715 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.011427 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.017702 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt"] Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.037083 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f91c71f-6aed-457a-a9dc-29501d415575-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt\" (UID: \"1f91c71f-6aed-457a-a9dc-29501d415575\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.037221 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f91c71f-6aed-457a-a9dc-29501d415575-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt\" (UID: \"1f91c71f-6aed-457a-a9dc-29501d415575\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.037290 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w925\" (UniqueName: \"kubernetes.io/projected/1f91c71f-6aed-457a-a9dc-29501d415575-kube-api-access-8w925\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt\" (UID: \"1f91c71f-6aed-457a-a9dc-29501d415575\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.139614 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f91c71f-6aed-457a-a9dc-29501d415575-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt\" (UID: \"1f91c71f-6aed-457a-a9dc-29501d415575\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.139770 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f91c71f-6aed-457a-a9dc-29501d415575-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt\" (UID: \"1f91c71f-6aed-457a-a9dc-29501d415575\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.139895 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w925\" (UniqueName: 
\"kubernetes.io/projected/1f91c71f-6aed-457a-a9dc-29501d415575-kube-api-access-8w925\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt\" (UID: \"1f91c71f-6aed-457a-a9dc-29501d415575\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.147358 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f91c71f-6aed-457a-a9dc-29501d415575-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt\" (UID: \"1f91c71f-6aed-457a-a9dc-29501d415575\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.148708 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f91c71f-6aed-457a-a9dc-29501d415575-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt\" (UID: \"1f91c71f-6aed-457a-a9dc-29501d415575\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.170156 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w925\" (UniqueName: \"kubernetes.io/projected/1f91c71f-6aed-457a-a9dc-29501d415575-kube-api-access-8w925\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt\" (UID: \"1f91c71f-6aed-457a-a9dc-29501d415575\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.335227 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt" Jan 06 14:28:18 crc kubenswrapper[4869]: I0106 14:28:18.869759 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt"] Jan 06 14:28:19 crc kubenswrapper[4869]: I0106 14:28:19.893623 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt" event={"ID":"1f91c71f-6aed-457a-a9dc-29501d415575","Type":"ContainerStarted","Data":"1a483f8e504a483407d023d5a2e5f4bf7ff2a885b53c3cd015ab222c33a525d8"} Jan 06 14:28:19 crc kubenswrapper[4869]: I0106 14:28:19.894701 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt" event={"ID":"1f91c71f-6aed-457a-a9dc-29501d415575","Type":"ContainerStarted","Data":"cd761982d71765ab9c4263bb982bbcefbc3d9a85e3b60c1633bf223b27864ac6"} Jan 06 14:28:19 crc kubenswrapper[4869]: I0106 14:28:19.931301 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt" podStartSLOduration=2.324732722 podStartE2EDuration="2.931277858s" podCreationTimestamp="2026-01-06 14:28:17 +0000 UTC" firstStartedPulling="2026-01-06 14:28:18.876109264 +0000 UTC m=+1717.415796928" lastFinishedPulling="2026-01-06 14:28:19.4826544 +0000 UTC m=+1718.022342064" observedRunningTime="2026-01-06 14:28:19.914719803 +0000 UTC m=+1718.454407487" watchObservedRunningTime="2026-01-06 14:28:19.931277858 +0000 UTC m=+1718.470965522" Jan 06 14:28:24 crc kubenswrapper[4869]: I0106 14:28:24.939993 4869 generic.go:334] "Generic (PLEG): container finished" podID="1f91c71f-6aed-457a-a9dc-29501d415575" 
containerID="1a483f8e504a483407d023d5a2e5f4bf7ff2a885b53c3cd015ab222c33a525d8" exitCode=0 Jan 06 14:28:24 crc kubenswrapper[4869]: I0106 14:28:24.940142 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt" event={"ID":"1f91c71f-6aed-457a-a9dc-29501d415575","Type":"ContainerDied","Data":"1a483f8e504a483407d023d5a2e5f4bf7ff2a885b53c3cd015ab222c33a525d8"} Jan 06 14:28:25 crc kubenswrapper[4869]: I0106 14:28:25.704708 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f" Jan 06 14:28:25 crc kubenswrapper[4869]: E0106 14:28:25.705484 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:28:26 crc kubenswrapper[4869]: I0106 14:28:26.392742 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt" Jan 06 14:28:26 crc kubenswrapper[4869]: I0106 14:28:26.515432 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f91c71f-6aed-457a-a9dc-29501d415575-ssh-key-openstack-edpm-ipam\") pod \"1f91c71f-6aed-457a-a9dc-29501d415575\" (UID: \"1f91c71f-6aed-457a-a9dc-29501d415575\") " Jan 06 14:28:26 crc kubenswrapper[4869]: I0106 14:28:26.515526 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f91c71f-6aed-457a-a9dc-29501d415575-inventory\") pod \"1f91c71f-6aed-457a-a9dc-29501d415575\" (UID: \"1f91c71f-6aed-457a-a9dc-29501d415575\") " Jan 06 14:28:26 crc kubenswrapper[4869]: I0106 14:28:26.515653 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w925\" (UniqueName: \"kubernetes.io/projected/1f91c71f-6aed-457a-a9dc-29501d415575-kube-api-access-8w925\") pod \"1f91c71f-6aed-457a-a9dc-29501d415575\" (UID: \"1f91c71f-6aed-457a-a9dc-29501d415575\") " Jan 06 14:28:26 crc kubenswrapper[4869]: I0106 14:28:26.523016 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f91c71f-6aed-457a-a9dc-29501d415575-kube-api-access-8w925" (OuterVolumeSpecName: "kube-api-access-8w925") pod "1f91c71f-6aed-457a-a9dc-29501d415575" (UID: "1f91c71f-6aed-457a-a9dc-29501d415575"). InnerVolumeSpecName "kube-api-access-8w925". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:28:26 crc kubenswrapper[4869]: I0106 14:28:26.549112 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f91c71f-6aed-457a-a9dc-29501d415575-inventory" (OuterVolumeSpecName: "inventory") pod "1f91c71f-6aed-457a-a9dc-29501d415575" (UID: "1f91c71f-6aed-457a-a9dc-29501d415575"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:28:26 crc kubenswrapper[4869]: I0106 14:28:26.561979 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f91c71f-6aed-457a-a9dc-29501d415575-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1f91c71f-6aed-457a-a9dc-29501d415575" (UID: "1f91c71f-6aed-457a-a9dc-29501d415575"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:28:26 crc kubenswrapper[4869]: I0106 14:28:26.618544 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1f91c71f-6aed-457a-a9dc-29501d415575-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:28:26 crc kubenswrapper[4869]: I0106 14:28:26.618583 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1f91c71f-6aed-457a-a9dc-29501d415575-inventory\") on node \"crc\" DevicePath \"\"" Jan 06 14:28:26 crc kubenswrapper[4869]: I0106 14:28:26.618596 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w925\" (UniqueName: \"kubernetes.io/projected/1f91c71f-6aed-457a-a9dc-29501d415575-kube-api-access-8w925\") on node \"crc\" DevicePath \"\"" Jan 06 14:28:26 crc kubenswrapper[4869]: I0106 14:28:26.981271 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt" event={"ID":"1f91c71f-6aed-457a-a9dc-29501d415575","Type":"ContainerDied","Data":"cd761982d71765ab9c4263bb982bbcefbc3d9a85e3b60c1633bf223b27864ac6"} Jan 06 14:28:26 crc kubenswrapper[4869]: I0106 14:28:26.981353 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd761982d71765ab9c4263bb982bbcefbc3d9a85e3b60c1633bf223b27864ac6" Jan 06 14:28:26 crc kubenswrapper[4869]: I0106 14:28:26.981472 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt" Jan 06 14:28:27 crc kubenswrapper[4869]: I0106 14:28:27.116287 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx"] Jan 06 14:28:27 crc kubenswrapper[4869]: E0106 14:28:27.117167 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f91c71f-6aed-457a-a9dc-29501d415575" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 06 14:28:27 crc kubenswrapper[4869]: I0106 14:28:27.117203 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f91c71f-6aed-457a-a9dc-29501d415575" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 06 14:28:27 crc kubenswrapper[4869]: I0106 14:28:27.117779 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f91c71f-6aed-457a-a9dc-29501d415575" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 06 14:28:27 crc kubenswrapper[4869]: I0106 14:28:27.121802 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx" Jan 06 14:28:27 crc kubenswrapper[4869]: I0106 14:28:27.127228 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:28:27 crc kubenswrapper[4869]: I0106 14:28:27.127524 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:28:27 crc kubenswrapper[4869]: I0106 14:28:27.127649 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:28:27 crc kubenswrapper[4869]: I0106 14:28:27.136251 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx"] Jan 06 14:28:27 crc kubenswrapper[4869]: I0106 14:28:27.138979 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:28:27 crc kubenswrapper[4869]: I0106 14:28:27.230792 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/200765a0-cf26-4e96-bee0-30dd911e7576-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx\" (UID: \"200765a0-cf26-4e96-bee0-30dd911e7576\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx" Jan 06 14:28:27 crc kubenswrapper[4869]: I0106 14:28:27.230870 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcdw4\" (UniqueName: \"kubernetes.io/projected/200765a0-cf26-4e96-bee0-30dd911e7576-kube-api-access-wcdw4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx\" (UID: \"200765a0-cf26-4e96-bee0-30dd911e7576\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx" Jan 06 14:28:27 crc kubenswrapper[4869]: I0106 14:28:27.230924 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/200765a0-cf26-4e96-bee0-30dd911e7576-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx\" (UID: \"200765a0-cf26-4e96-bee0-30dd911e7576\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx" Jan 06 14:28:27 crc kubenswrapper[4869]: I0106 14:28:27.333686 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/200765a0-cf26-4e96-bee0-30dd911e7576-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx\" (UID: \"200765a0-cf26-4e96-bee0-30dd911e7576\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx" Jan 06 14:28:27 crc kubenswrapper[4869]: I0106 14:28:27.333782 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcdw4\" (UniqueName: \"kubernetes.io/projected/200765a0-cf26-4e96-bee0-30dd911e7576-kube-api-access-wcdw4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx\" (UID: \"200765a0-cf26-4e96-bee0-30dd911e7576\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx" Jan 06 14:28:27 crc kubenswrapper[4869]: I0106 14:28:27.333847 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/200765a0-cf26-4e96-bee0-30dd911e7576-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx\" (UID: \"200765a0-cf26-4e96-bee0-30dd911e7576\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx" Jan 06 14:28:27 crc kubenswrapper[4869]: I0106 14:28:27.337699 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/200765a0-cf26-4e96-bee0-30dd911e7576-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx\" (UID: \"200765a0-cf26-4e96-bee0-30dd911e7576\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx" Jan 06 14:28:27 crc kubenswrapper[4869]: I0106 14:28:27.337910 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/200765a0-cf26-4e96-bee0-30dd911e7576-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx\" (UID: \"200765a0-cf26-4e96-bee0-30dd911e7576\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx" Jan 06 14:28:27 crc kubenswrapper[4869]: I0106 14:28:27.361253 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcdw4\" (UniqueName: \"kubernetes.io/projected/200765a0-cf26-4e96-bee0-30dd911e7576-kube-api-access-wcdw4\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx\" (UID: \"200765a0-cf26-4e96-bee0-30dd911e7576\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx" Jan 06 14:28:27 crc kubenswrapper[4869]: I0106 14:28:27.460783 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx" Jan 06 14:28:28 crc kubenswrapper[4869]: I0106 14:28:28.004998 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx"] Jan 06 14:28:29 crc kubenswrapper[4869]: I0106 14:28:29.004906 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx" event={"ID":"200765a0-cf26-4e96-bee0-30dd911e7576","Type":"ContainerStarted","Data":"6146b3f7d9edd3f621fa613299e24db32247b1f424e3896d6bdc33902f806203"} Jan 06 14:28:29 crc kubenswrapper[4869]: I0106 14:28:29.005773 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx" event={"ID":"200765a0-cf26-4e96-bee0-30dd911e7576","Type":"ContainerStarted","Data":"2af273fe249fff5c5578e63aec49e7b987ef9cbdc075024a4ac3f879c0337f0f"} Jan 06 14:28:29 crc kubenswrapper[4869]: I0106 14:28:29.023583 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx" podStartSLOduration=1.521928339 podStartE2EDuration="2.023567833s" podCreationTimestamp="2026-01-06 14:28:27 +0000 UTC" firstStartedPulling="2026-01-06 14:28:28.023076244 +0000 UTC m=+1726.562763928" lastFinishedPulling="2026-01-06 14:28:28.524715758 +0000 UTC m=+1727.064403422" observedRunningTime="2026-01-06 14:28:29.01810168 +0000 UTC m=+1727.557789344" watchObservedRunningTime="2026-01-06 14:28:29.023567833 +0000 UTC m=+1727.563255497" Jan 06 14:28:36 crc kubenswrapper[4869]: I0106 14:28:36.704885 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f" Jan 06 14:28:36 crc kubenswrapper[4869]: E0106 14:28:36.706285 
Jan 06 14:28:38 crc kubenswrapper[4869]: I0106 14:28:38.063483 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-xbh9m"]
Jan 06 14:28:38 crc kubenswrapper[4869]: I0106 14:28:38.079905 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-nzc44"]
Jan 06 14:28:38 crc kubenswrapper[4869]: I0106 14:28:38.095831 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-xbh9m"]
Jan 06 14:28:38 crc kubenswrapper[4869]: I0106 14:28:38.106557 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-d94xt"]
Jan 06 14:28:38 crc kubenswrapper[4869]: I0106 14:28:38.116115 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-nzc44"]
Jan 06 14:28:38 crc kubenswrapper[4869]: I0106 14:28:38.127288 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-d94xt"]
Jan 06 14:28:39 crc kubenswrapper[4869]: I0106 14:28:39.721531 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f6c4b71-32a5-473c-bdbb-d23acccaf5a3" path="/var/lib/kubelet/pods/1f6c4b71-32a5-473c-bdbb-d23acccaf5a3/volumes"
Jan 06 14:28:39 crc kubenswrapper[4869]: I0106 14:28:39.723755 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f635641-cd18-4d1b-a2a6-80a4b4b0697b" path="/var/lib/kubelet/pods/4f635641-cd18-4d1b-a2a6-80a4b4b0697b/volumes"
Jan 06 14:28:39 crc kubenswrapper[4869]: I0106 14:28:39.724654 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7f5335d-50bb-4886-a562-e6ff443fb449" path="/var/lib/kubelet/pods/c7f5335d-50bb-4886-a562-e6ff443fb449/volumes"
Jan 06 14:28:48 crc kubenswrapper[4869]: I0106 14:28:48.704256 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f"
Jan 06 14:28:48 crc kubenswrapper[4869]: E0106 14:28:48.705018 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1"
Jan 06 14:28:53 crc kubenswrapper[4869]: I0106 14:28:53.050332 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-fw77s"]
Jan 06 14:28:53 crc kubenswrapper[4869]: I0106 14:28:53.065036 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-fw77s"]
Jan 06 14:28:53 crc kubenswrapper[4869]: I0106 14:28:53.210439 4869 scope.go:117] "RemoveContainer" containerID="427e2d69771491464afb0a969ba9d62750b64a3eec599774401f8e0c508e1865"
Jan 06 14:28:53 crc kubenswrapper[4869]: I0106 14:28:53.257199 4869 scope.go:117] "RemoveContainer" containerID="cb6a340dd7a9247b368f52244901abb9f10b008d7f909ca03ef408225b756f21"
Jan 06 14:28:53 crc kubenswrapper[4869]: I0106 14:28:53.355464 4869 scope.go:117] "RemoveContainer" containerID="508bae0d5905ac1881be1320d226acd6eaba11ee012e6b14f7aeef1db91afa65"
Jan 06 14:28:53 crc kubenswrapper[4869]: I0106 14:28:53.385546 4869 scope.go:117] "RemoveContainer" containerID="b5670d758ec2da20f5fbeff760a97965b5c2917bfdcb1a4c4ed32d04db93fcc3"
Jan 06 14:28:53 crc kubenswrapper[4869]: I0106 14:28:53.721767 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64424807-a383-4509-a25c-947f73a29e64" path="/var/lib/kubelet/pods/64424807-a383-4509-a25c-947f73a29e64/volumes"
Jan 06 14:28:54 crc kubenswrapper[4869]: I0106 14:28:54.033096 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-5qp9n"]
Jan 06 14:28:54 crc kubenswrapper[4869]: I0106 14:28:54.042380 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-5qp9n"]
Jan 06 14:28:55 crc kubenswrapper[4869]: I0106 14:28:55.726186 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5324a677-1d17-4031-ace1-8fc98bc58f9d" path="/var/lib/kubelet/pods/5324a677-1d17-4031-ace1-8fc98bc58f9d/volumes"
Jan 06 14:29:00 crc kubenswrapper[4869]: I0106 14:29:00.705112 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f"
Jan 06 14:29:00 crc kubenswrapper[4869]: E0106 14:29:00.706495 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1"
Jan 06 14:29:15 crc kubenswrapper[4869]: I0106 14:29:15.707058 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f"
Jan 06 14:29:15 crc kubenswrapper[4869]: E0106 14:29:15.708138 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1"
Jan 06 14:29:26 crc kubenswrapper[4869]: I0106 14:29:26.704568 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f"
Jan 06 14:29:26 crc kubenswrapper[4869]: E0106 14:29:26.705709 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1"
Jan 06 14:29:27 crc kubenswrapper[4869]: I0106 14:29:27.046850 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-97dc-account-create-update-sslwl"]
Jan 06 14:29:27 crc kubenswrapper[4869]: I0106 14:29:27.057852 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-97dc-account-create-update-sslwl"]
Jan 06 14:29:27 crc kubenswrapper[4869]: I0106 14:29:27.726879 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0088081d-47a5-4616-9c0a-36934cb45b2a" path="/var/lib/kubelet/pods/0088081d-47a5-4616-9c0a-36934cb45b2a/volumes"
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0088081d-47a5-4616-9c0a-36934cb45b2a" path="/var/lib/kubelet/pods/0088081d-47a5-4616-9c0a-36934cb45b2a/volumes" Jan 06 14:29:28 crc kubenswrapper[4869]: I0106 14:29:28.041776 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-thdfz"] Jan 06 14:29:28 crc kubenswrapper[4869]: I0106 14:29:28.055348 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-1885-account-create-update-7g8w2"] Jan 06 14:29:28 crc kubenswrapper[4869]: I0106 14:29:28.070111 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-thdfz"] Jan 06 14:29:28 crc kubenswrapper[4869]: I0106 14:29:28.078295 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6e3c-account-create-update-4bhj5"] Jan 06 14:29:28 crc kubenswrapper[4869]: I0106 14:29:28.084947 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-q9kvc"] Jan 06 14:29:28 crc kubenswrapper[4869]: I0106 14:29:28.091143 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-xp2nl"] Jan 06 14:29:28 crc kubenswrapper[4869]: I0106 14:29:28.097480 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-1885-account-create-update-7g8w2"] Jan 06 14:29:28 crc kubenswrapper[4869]: I0106 14:29:28.104492 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-q9kvc"] Jan 06 14:29:28 crc kubenswrapper[4869]: I0106 14:29:28.111921 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-xp2nl"] Jan 06 14:29:28 crc kubenswrapper[4869]: I0106 14:29:28.121051 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-6e3c-account-create-update-4bhj5"] Jan 06 14:29:29 crc kubenswrapper[4869]: I0106 14:29:29.701898 4869 generic.go:334] "Generic (PLEG): container finished" podID="200765a0-cf26-4e96-bee0-30dd911e7576" containerID="6146b3f7d9edd3f621fa613299e24db32247b1f424e3896d6bdc33902f806203" exitCode=0 Jan 06 14:29:29 crc kubenswrapper[4869]: I0106 14:29:29.701963 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx" event={"ID":"200765a0-cf26-4e96-bee0-30dd911e7576","Type":"ContainerDied","Data":"6146b3f7d9edd3f621fa613299e24db32247b1f424e3896d6bdc33902f806203"} Jan 06 14:29:29 crc kubenswrapper[4869]: I0106 14:29:29.724351 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f95a5bc-df02-4c08-bd3d-fbc4faa9db21" path="/var/lib/kubelet/pods/3f95a5bc-df02-4c08-bd3d-fbc4faa9db21/volumes" Jan 06 14:29:29 crc kubenswrapper[4869]: I0106 14:29:29.724941 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a92a488-1e14-45e0-9dc7-c09605d26de5" path="/var/lib/kubelet/pods/8a92a488-1e14-45e0-9dc7-c09605d26de5/volumes" Jan 06 14:29:29 crc kubenswrapper[4869]: I0106 14:29:29.725498 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93537664-809a-4f60-add8-bccd7a8b08a2" path="/var/lib/kubelet/pods/93537664-809a-4f60-add8-bccd7a8b08a2/volumes" Jan 06 14:29:29 crc kubenswrapper[4869]: I0106 14:29:29.726143 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0d99944-ef47-4a37-b27b-b68ee2aafa99" path="/var/lib/kubelet/pods/a0d99944-ef47-4a37-b27b-b68ee2aafa99/volumes" Jan 06 14:29:29 crc kubenswrapper[4869]: I0106 14:29:29.728649 4869 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="cf1c9b2b-a06d-40c4-8471-246a2041fa96" path="/var/lib/kubelet/pods/cf1c9b2b-a06d-40c4-8471-246a2041fa96/volumes" Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.256289 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx" Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.408357 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/200765a0-cf26-4e96-bee0-30dd911e7576-ssh-key-openstack-edpm-ipam\") pod \"200765a0-cf26-4e96-bee0-30dd911e7576\" (UID: \"200765a0-cf26-4e96-bee0-30dd911e7576\") " Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.408455 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/200765a0-cf26-4e96-bee0-30dd911e7576-inventory\") pod \"200765a0-cf26-4e96-bee0-30dd911e7576\" (UID: \"200765a0-cf26-4e96-bee0-30dd911e7576\") " Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.408884 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcdw4\" (UniqueName: \"kubernetes.io/projected/200765a0-cf26-4e96-bee0-30dd911e7576-kube-api-access-wcdw4\") pod \"200765a0-cf26-4e96-bee0-30dd911e7576\" (UID: \"200765a0-cf26-4e96-bee0-30dd911e7576\") " Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.416934 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/200765a0-cf26-4e96-bee0-30dd911e7576-kube-api-access-wcdw4" (OuterVolumeSpecName: "kube-api-access-wcdw4") pod "200765a0-cf26-4e96-bee0-30dd911e7576" (UID: "200765a0-cf26-4e96-bee0-30dd911e7576"). InnerVolumeSpecName "kube-api-access-wcdw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.433983 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/200765a0-cf26-4e96-bee0-30dd911e7576-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "200765a0-cf26-4e96-bee0-30dd911e7576" (UID: "200765a0-cf26-4e96-bee0-30dd911e7576"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.437196 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/200765a0-cf26-4e96-bee0-30dd911e7576-inventory" (OuterVolumeSpecName: "inventory") pod "200765a0-cf26-4e96-bee0-30dd911e7576" (UID: "200765a0-cf26-4e96-bee0-30dd911e7576"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.510925 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/200765a0-cf26-4e96-bee0-30dd911e7576-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.510957 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/200765a0-cf26-4e96-bee0-30dd911e7576-inventory\") on node \"crc\" DevicePath \"\"" Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.510967 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcdw4\" (UniqueName: \"kubernetes.io/projected/200765a0-cf26-4e96-bee0-30dd911e7576-kube-api-access-wcdw4\") on node \"crc\" DevicePath \"\"" Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.729103 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx" event={"ID":"200765a0-cf26-4e96-bee0-30dd911e7576","Type":"ContainerDied","Data":"2af273fe249fff5c5578e63aec49e7b987ef9cbdc075024a4ac3f879c0337f0f"} Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.729509 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2af273fe249fff5c5578e63aec49e7b987ef9cbdc075024a4ac3f879c0337f0f" Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.729178 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx" Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.849770 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wk646"] Jan 06 14:29:31 crc kubenswrapper[4869]: E0106 14:29:31.850461 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200765a0-cf26-4e96-bee0-30dd911e7576" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.850497 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="200765a0-cf26-4e96-bee0-30dd911e7576" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.850877 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="200765a0-cf26-4e96-bee0-30dd911e7576" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.852154 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wk646" Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.856221 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.856522 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.856827 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.857079 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:29:31 crc kubenswrapper[4869]: I0106 14:29:31.859826 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wk646"] Jan 06 14:29:32 crc kubenswrapper[4869]: I0106 14:29:32.021501 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/83886cc7-f2ac-4f61-bf3e-18eee999fae1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wk646\" (UID: \"83886cc7-f2ac-4f61-bf3e-18eee999fae1\") " pod="openstack/ssh-known-hosts-edpm-deployment-wk646" Jan 06 14:29:32 crc kubenswrapper[4869]: I0106 14:29:32.021810 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83886cc7-f2ac-4f61-bf3e-18eee999fae1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wk646\" (UID: \"83886cc7-f2ac-4f61-bf3e-18eee999fae1\") " pod="openstack/ssh-known-hosts-edpm-deployment-wk646" Jan 06 14:29:32 crc kubenswrapper[4869]: I0106 14:29:32.022868 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shgbg\" (UniqueName: \"kubernetes.io/projected/83886cc7-f2ac-4f61-bf3e-18eee999fae1-kube-api-access-shgbg\") pod \"ssh-known-hosts-edpm-deployment-wk646\" (UID: \"83886cc7-f2ac-4f61-bf3e-18eee999fae1\") " pod="openstack/ssh-known-hosts-edpm-deployment-wk646" Jan 06 14:29:32 crc kubenswrapper[4869]: I0106 14:29:32.125010 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/83886cc7-f2ac-4f61-bf3e-18eee999fae1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wk646\" (UID: \"83886cc7-f2ac-4f61-bf3e-18eee999fae1\") " pod="openstack/ssh-known-hosts-edpm-deployment-wk646" Jan 06 14:29:32 crc kubenswrapper[4869]: I0106 14:29:32.125192 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83886cc7-f2ac-4f61-bf3e-18eee999fae1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wk646\" (UID: \"83886cc7-f2ac-4f61-bf3e-18eee999fae1\") " pod="openstack/ssh-known-hosts-edpm-deployment-wk646" Jan 06 14:29:32 crc kubenswrapper[4869]: I0106 14:29:32.126550 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shgbg\" (UniqueName: \"kubernetes.io/projected/83886cc7-f2ac-4f61-bf3e-18eee999fae1-kube-api-access-shgbg\") pod \"ssh-known-hosts-edpm-deployment-wk646\" (UID: \"83886cc7-f2ac-4f61-bf3e-18eee999fae1\") " pod="openstack/ssh-known-hosts-edpm-deployment-wk646" Jan 06 14:29:32 crc 
kubenswrapper[4869]: I0106 14:29:32.131153 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83886cc7-f2ac-4f61-bf3e-18eee999fae1-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wk646\" (UID: \"83886cc7-f2ac-4f61-bf3e-18eee999fae1\") " pod="openstack/ssh-known-hosts-edpm-deployment-wk646" Jan 06 14:29:32 crc kubenswrapper[4869]: I0106 14:29:32.132005 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/83886cc7-f2ac-4f61-bf3e-18eee999fae1-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wk646\" (UID: \"83886cc7-f2ac-4f61-bf3e-18eee999fae1\") " pod="openstack/ssh-known-hosts-edpm-deployment-wk646" Jan 06 14:29:32 crc kubenswrapper[4869]: I0106 14:29:32.156910 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shgbg\" (UniqueName: \"kubernetes.io/projected/83886cc7-f2ac-4f61-bf3e-18eee999fae1-kube-api-access-shgbg\") pod \"ssh-known-hosts-edpm-deployment-wk646\" (UID: \"83886cc7-f2ac-4f61-bf3e-18eee999fae1\") " pod="openstack/ssh-known-hosts-edpm-deployment-wk646" Jan 06 14:29:32 crc kubenswrapper[4869]: I0106 14:29:32.173836 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wk646" Jan 06 14:29:32 crc kubenswrapper[4869]: I0106 14:29:32.835717 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wk646"] Jan 06 14:29:32 crc kubenswrapper[4869]: W0106 14:29:32.844205 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83886cc7_f2ac_4f61_bf3e_18eee999fae1.slice/crio-5f46e9933c431953a1e01caf64dae39c534fb19684746bc655f73c89647cbf0d WatchSource:0}: Error finding container 5f46e9933c431953a1e01caf64dae39c534fb19684746bc655f73c89647cbf0d: Status 404 returned error can't find the container with id 5f46e9933c431953a1e01caf64dae39c534fb19684746bc655f73c89647cbf0d Jan 06 14:29:33 crc kubenswrapper[4869]: I0106 14:29:33.751911 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wk646" event={"ID":"83886cc7-f2ac-4f61-bf3e-18eee999fae1","Type":"ContainerStarted","Data":"351e7c373729b88778d24b639e18c87b1d6846663e808d9d931391ccfd1de8ef"} Jan 06 14:29:33 crc kubenswrapper[4869]: I0106 14:29:33.752404 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wk646" event={"ID":"83886cc7-f2ac-4f61-bf3e-18eee999fae1","Type":"ContainerStarted","Data":"5f46e9933c431953a1e01caf64dae39c534fb19684746bc655f73c89647cbf0d"} Jan 06 14:29:33 crc kubenswrapper[4869]: I0106 14:29:33.781005 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-wk646" podStartSLOduration=2.201610798 podStartE2EDuration="2.78098282s" podCreationTimestamp="2026-01-06 14:29:31 +0000 UTC" firstStartedPulling="2026-01-06 14:29:32.853477274 +0000 UTC m=+1791.393164938" lastFinishedPulling="2026-01-06 14:29:33.432849296 +0000 UTC m=+1791.972536960" observedRunningTime="2026-01-06 14:29:33.773067226 +0000 UTC m=+1792.312754890" watchObservedRunningTime="2026-01-06 14:29:33.78098282 +0000 UTC m=+1792.320670494" Jan 06 14:29:41 crc kubenswrapper[4869]: I0106 14:29:41.711865 4869 scope.go:117] "RemoveContainer" 
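The mount sequence above (VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded) is kubelet's volume reconciler walking the pod's volumes; the "kubernetes.io/secret/<pod-uid>-<name>" UniqueNames are Secret volumes and the "kubernetes.io/projected/..." one is the auto-injected service-account token. A sketch of the pod-spec shape that would produce these entries, using the `kubernetes` Python client; the secret_name values are assumptions for illustration (the log names the volumes and the cached Secrets, but not which Secret backs which volume):

```python
# Hypothetical reconstruction of the volume section behind the
# MountVolume.SetUp entries for ssh-known-hosts-edpm-deployment-wk646.
from kubernetes import client

volumes = [
    # "kubernetes.io/secret/<pod-uid>-inventory-0" -> Secret volume "inventory-0"
    client.V1Volume(
        name="inventory-0",
        secret=client.V1SecretVolumeSource(
            secret_name="dataplanenodeset-openstack-edpm-ipam"),  # assumed mapping
    ),
    # "kubernetes.io/secret/<pod-uid>-ssh-key-openstack-edpm-ipam"
    client.V1Volume(
        name="ssh-key-openstack-edpm-ipam",
        secret=client.V1SecretVolumeSource(
            secret_name="dataplane-ansible-ssh-private-key-secret"),  # assumed mapping
    ),
    # "kubernetes.io/projected/<pod-uid>-kube-api-access-shgbg" is the projected
    # service-account token volume, injected automatically rather than written by hand.
]
```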
containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f" Jan 06 14:29:41 crc kubenswrapper[4869]: E0106 14:29:41.713062 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:29:41 crc kubenswrapper[4869]: I0106 14:29:41.837365 4869 generic.go:334] "Generic (PLEG): container finished" podID="83886cc7-f2ac-4f61-bf3e-18eee999fae1" containerID="351e7c373729b88778d24b639e18c87b1d6846663e808d9d931391ccfd1de8ef" exitCode=0 Jan 06 14:29:41 crc kubenswrapper[4869]: I0106 14:29:41.837440 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wk646" event={"ID":"83886cc7-f2ac-4f61-bf3e-18eee999fae1","Type":"ContainerDied","Data":"351e7c373729b88778d24b639e18c87b1d6846663e808d9d931391ccfd1de8ef"} Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.287731 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wk646" Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.356185 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/83886cc7-f2ac-4f61-bf3e-18eee999fae1-inventory-0\") pod \"83886cc7-f2ac-4f61-bf3e-18eee999fae1\" (UID: \"83886cc7-f2ac-4f61-bf3e-18eee999fae1\") " Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.356230 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shgbg\" (UniqueName: \"kubernetes.io/projected/83886cc7-f2ac-4f61-bf3e-18eee999fae1-kube-api-access-shgbg\") pod \"83886cc7-f2ac-4f61-bf3e-18eee999fae1\" (UID: \"83886cc7-f2ac-4f61-bf3e-18eee999fae1\") " Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.356318 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83886cc7-f2ac-4f61-bf3e-18eee999fae1-ssh-key-openstack-edpm-ipam\") pod \"83886cc7-f2ac-4f61-bf3e-18eee999fae1\" (UID: \"83886cc7-f2ac-4f61-bf3e-18eee999fae1\") " Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.365611 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83886cc7-f2ac-4f61-bf3e-18eee999fae1-kube-api-access-shgbg" (OuterVolumeSpecName: "kube-api-access-shgbg") pod "83886cc7-f2ac-4f61-bf3e-18eee999fae1" (UID: "83886cc7-f2ac-4f61-bf3e-18eee999fae1"). InnerVolumeSpecName "kube-api-access-shgbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.387949 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83886cc7-f2ac-4f61-bf3e-18eee999fae1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "83886cc7-f2ac-4f61-bf3e-18eee999fae1" (UID: "83886cc7-f2ac-4f61-bf3e-18eee999fae1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.391415 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83886cc7-f2ac-4f61-bf3e-18eee999fae1-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "83886cc7-f2ac-4f61-bf3e-18eee999fae1" (UID: "83886cc7-f2ac-4f61-bf3e-18eee999fae1"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.458022 4869 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/83886cc7-f2ac-4f61-bf3e-18eee999fae1-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.458062 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shgbg\" (UniqueName: \"kubernetes.io/projected/83886cc7-f2ac-4f61-bf3e-18eee999fae1-kube-api-access-shgbg\") on node \"crc\" DevicePath \"\"" Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.458072 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/83886cc7-f2ac-4f61-bf3e-18eee999fae1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.873715 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wk646" event={"ID":"83886cc7-f2ac-4f61-bf3e-18eee999fae1","Type":"ContainerDied","Data":"5f46e9933c431953a1e01caf64dae39c534fb19684746bc655f73c89647cbf0d"} Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.874156 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f46e9933c431953a1e01caf64dae39c534fb19684746bc655f73c89647cbf0d" Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.875839 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wk646" Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.955709 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l"] Jan 06 14:29:43 crc kubenswrapper[4869]: E0106 14:29:43.956155 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83886cc7-f2ac-4f61-bf3e-18eee999fae1" containerName="ssh-known-hosts-edpm-deployment" Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.956179 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="83886cc7-f2ac-4f61-bf3e-18eee999fae1" containerName="ssh-known-hosts-edpm-deployment" Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.956356 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="83886cc7-f2ac-4f61-bf3e-18eee999fae1" containerName="ssh-known-hosts-edpm-deployment" Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.957188 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l" Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.959621 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.959912 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.959952 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.960083 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:29:43 crc kubenswrapper[4869]: I0106 14:29:43.986019 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l"] Jan 06 14:29:44 crc kubenswrapper[4869]: I0106 14:29:44.075811 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffefabbd-c814-4a05-a557-95e0e07664f1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s6b4l\" (UID: \"ffefabbd-c814-4a05-a557-95e0e07664f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l" Jan 06 14:29:44 crc kubenswrapper[4869]: I0106 14:29:44.075911 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffefabbd-c814-4a05-a557-95e0e07664f1-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s6b4l\" (UID: \"ffefabbd-c814-4a05-a557-95e0e07664f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l" Jan 06 14:29:44 crc kubenswrapper[4869]: I0106 14:29:44.075975 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86qhf\" (UniqueName: \"kubernetes.io/projected/ffefabbd-c814-4a05-a557-95e0e07664f1-kube-api-access-86qhf\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s6b4l\" (UID: \"ffefabbd-c814-4a05-a557-95e0e07664f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l" Jan 06 14:29:44 crc kubenswrapper[4869]: I0106 14:29:44.177656 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86qhf\" (UniqueName: \"kubernetes.io/projected/ffefabbd-c814-4a05-a557-95e0e07664f1-kube-api-access-86qhf\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s6b4l\" (UID: \"ffefabbd-c814-4a05-a557-95e0e07664f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l" Jan 06 14:29:44 crc kubenswrapper[4869]: I0106 14:29:44.177793 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffefabbd-c814-4a05-a557-95e0e07664f1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s6b4l\" (UID: \"ffefabbd-c814-4a05-a557-95e0e07664f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l" Jan 06 14:29:44 crc kubenswrapper[4869]: I0106 14:29:44.177840 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffefabbd-c814-4a05-a557-95e0e07664f1-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-s6b4l\" (UID: \"ffefabbd-c814-4a05-a557-95e0e07664f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l" Jan 06 14:29:44 crc kubenswrapper[4869]: I0106 14:29:44.182326 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffefabbd-c814-4a05-a557-95e0e07664f1-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s6b4l\" (UID: \"ffefabbd-c814-4a05-a557-95e0e07664f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l" Jan 06 14:29:44 crc kubenswrapper[4869]: I0106 14:29:44.184713 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffefabbd-c814-4a05-a557-95e0e07664f1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s6b4l\" (UID: \"ffefabbd-c814-4a05-a557-95e0e07664f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l" Jan 06 14:29:44 crc kubenswrapper[4869]: I0106 14:29:44.195456 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86qhf\" (UniqueName: \"kubernetes.io/projected/ffefabbd-c814-4a05-a557-95e0e07664f1-kube-api-access-86qhf\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-s6b4l\" (UID: \"ffefabbd-c814-4a05-a557-95e0e07664f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l" Jan 06 14:29:44 crc kubenswrapper[4869]: I0106 14:29:44.288455 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l" Jan 06 14:29:44 crc kubenswrapper[4869]: I0106 14:29:44.813635 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l"] Jan 06 14:29:44 crc kubenswrapper[4869]: W0106 14:29:44.816649 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffefabbd_c814_4a05_a557_95e0e07664f1.slice/crio-65a516d11a153e2acfb3bdcd03c8fbfbae3b2edcbb9131a51eaf1ca4c3ffc3e5 WatchSource:0}: Error finding container 65a516d11a153e2acfb3bdcd03c8fbfbae3b2edcbb9131a51eaf1ca4c3ffc3e5: Status 404 returned error can't find the container with id 65a516d11a153e2acfb3bdcd03c8fbfbae3b2edcbb9131a51eaf1ca4c3ffc3e5 Jan 06 14:29:44 crc kubenswrapper[4869]: I0106 14:29:44.882398 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l" event={"ID":"ffefabbd-c814-4a05-a557-95e0e07664f1","Type":"ContainerStarted","Data":"65a516d11a153e2acfb3bdcd03c8fbfbae3b2edcbb9131a51eaf1ca4c3ffc3e5"} Jan 06 14:29:45 crc kubenswrapper[4869]: I0106 14:29:45.895452 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l" event={"ID":"ffefabbd-c814-4a05-a557-95e0e07664f1","Type":"ContainerStarted","Data":"b2cf481f27b26951ff65ad884bec0a9db206e9e72f69cce25115dfb74ecf6d2a"} Jan 06 14:29:45 crc kubenswrapper[4869]: I0106 14:29:45.917517 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l" podStartSLOduration=2.5066166 podStartE2EDuration="2.917492057s" podCreationTimestamp="2026-01-06 14:29:43 +0000 UTC" firstStartedPulling="2026-01-06 14:29:44.819408433 +0000 UTC m=+1803.359096097" lastFinishedPulling="2026-01-06 14:29:45.23028389 +0000 UTC m=+1803.769971554" 
observedRunningTime="2026-01-06 14:29:45.911602822 +0000 UTC m=+1804.451290516" watchObservedRunningTime="2026-01-06 14:29:45.917492057 +0000 UTC m=+1804.457179721" Jan 06 14:29:50 crc kubenswrapper[4869]: I0106 14:29:50.051994 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mlp2w"] Jan 06 14:29:50 crc kubenswrapper[4869]: I0106 14:29:50.062381 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mlp2w"] Jan 06 14:29:51 crc kubenswrapper[4869]: I0106 14:29:51.715476 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bae6e299-373f-4381-ac37-5aadba9f902f" path="/var/lib/kubelet/pods/bae6e299-373f-4381-ac37-5aadba9f902f/volumes" Jan 06 14:29:53 crc kubenswrapper[4869]: I0106 14:29:53.504909 4869 scope.go:117] "RemoveContainer" containerID="8d9c15e047eafd4e3e8115c4419a4acf8f37c630b04a0c0ca324731fc604bfb2" Jan 06 14:29:53 crc kubenswrapper[4869]: I0106 14:29:53.538573 4869 scope.go:117] "RemoveContainer" containerID="a442fd258a4312bec1ba9f69131fc2f167a7c27cf6edd41fc11a7aedeb798cd8" Jan 06 14:29:53 crc kubenswrapper[4869]: I0106 14:29:53.595115 4869 scope.go:117] "RemoveContainer" containerID="78ac26826386d5faa21a28fe2dbe37fed31231000348d6e855e73758afeb49aa" Jan 06 14:29:53 crc kubenswrapper[4869]: I0106 14:29:53.662590 4869 scope.go:117] "RemoveContainer" containerID="3ecc254d269fae924cdc63861cb92522b9e11267e9dea177ef861735a6ab6e53" Jan 06 14:29:53 crc kubenswrapper[4869]: I0106 14:29:53.706166 4869 scope.go:117] "RemoveContainer" containerID="a92f016203396ce371d741145531991db784723689b0373db755971bd19606e6" Jan 06 14:29:53 crc kubenswrapper[4869]: I0106 14:29:53.765314 4869 scope.go:117] "RemoveContainer" containerID="0adbee660119e241e41e18d02fd623a28e5952ceafe45b08570787fa2f998f75" Jan 06 14:29:53 crc kubenswrapper[4869]: I0106 14:29:53.788007 4869 scope.go:117] "RemoveContainer" containerID="c65ea9c730db9bb85ab78dd88ea1cfa73a6fc8938dcdb7ddaf8f980c174cd27a" Jan 06 14:29:53 crc kubenswrapper[4869]: I0106 14:29:53.814958 4869 scope.go:117] "RemoveContainer" containerID="5218efbc4fe6ae2dc7deeadb2e4be6dd7f1e14ce709e74c0435437d8a7480171" Jan 06 14:29:53 crc kubenswrapper[4869]: I0106 14:29:53.839814 4869 scope.go:117] "RemoveContainer" containerID="44c925c3007a9cd06e71f4720ecee87ba629f9b34b1dd6fa33ef19cdf866fc5f" Jan 06 14:29:55 crc kubenswrapper[4869]: I0106 14:29:55.004134 4869 generic.go:334] "Generic (PLEG): container finished" podID="ffefabbd-c814-4a05-a557-95e0e07664f1" containerID="b2cf481f27b26951ff65ad884bec0a9db206e9e72f69cce25115dfb74ecf6d2a" exitCode=0 Jan 06 14:29:55 crc kubenswrapper[4869]: I0106 14:29:55.004261 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l" event={"ID":"ffefabbd-c814-4a05-a557-95e0e07664f1","Type":"ContainerDied","Data":"b2cf481f27b26951ff65ad884bec0a9db206e9e72f69cce25115dfb74ecf6d2a"} Jan 06 14:29:55 crc kubenswrapper[4869]: I0106 14:29:55.709497 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f" Jan 06 14:29:55 crc kubenswrapper[4869]: E0106 14:29:55.710191 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:29:56 crc kubenswrapper[4869]: I0106 14:29:56.500527 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l" Jan 06 14:29:56 crc kubenswrapper[4869]: I0106 14:29:56.558037 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86qhf\" (UniqueName: \"kubernetes.io/projected/ffefabbd-c814-4a05-a557-95e0e07664f1-kube-api-access-86qhf\") pod \"ffefabbd-c814-4a05-a557-95e0e07664f1\" (UID: \"ffefabbd-c814-4a05-a557-95e0e07664f1\") " Jan 06 14:29:56 crc kubenswrapper[4869]: I0106 14:29:56.558130 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffefabbd-c814-4a05-a557-95e0e07664f1-inventory\") pod \"ffefabbd-c814-4a05-a557-95e0e07664f1\" (UID: \"ffefabbd-c814-4a05-a557-95e0e07664f1\") " Jan 06 14:29:56 crc kubenswrapper[4869]: I0106 14:29:56.558167 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffefabbd-c814-4a05-a557-95e0e07664f1-ssh-key-openstack-edpm-ipam\") pod \"ffefabbd-c814-4a05-a557-95e0e07664f1\" (UID: \"ffefabbd-c814-4a05-a557-95e0e07664f1\") " Jan 06 14:29:56 crc kubenswrapper[4869]: I0106 14:29:56.564891 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffefabbd-c814-4a05-a557-95e0e07664f1-kube-api-access-86qhf" (OuterVolumeSpecName: "kube-api-access-86qhf") pod "ffefabbd-c814-4a05-a557-95e0e07664f1" (UID: "ffefabbd-c814-4a05-a557-95e0e07664f1"). InnerVolumeSpecName "kube-api-access-86qhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:29:56 crc kubenswrapper[4869]: I0106 14:29:56.583572 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffefabbd-c814-4a05-a557-95e0e07664f1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ffefabbd-c814-4a05-a557-95e0e07664f1" (UID: "ffefabbd-c814-4a05-a557-95e0e07664f1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:29:56 crc kubenswrapper[4869]: I0106 14:29:56.585824 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffefabbd-c814-4a05-a557-95e0e07664f1-inventory" (OuterVolumeSpecName: "inventory") pod "ffefabbd-c814-4a05-a557-95e0e07664f1" (UID: "ffefabbd-c814-4a05-a557-95e0e07664f1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:29:56 crc kubenswrapper[4869]: I0106 14:29:56.660363 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffefabbd-c814-4a05-a557-95e0e07664f1-inventory\") on node \"crc\" DevicePath \"\"" Jan 06 14:29:56 crc kubenswrapper[4869]: I0106 14:29:56.660419 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffefabbd-c814-4a05-a557-95e0e07664f1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:29:56 crc kubenswrapper[4869]: I0106 14:29:56.660443 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86qhf\" (UniqueName: \"kubernetes.io/projected/ffefabbd-c814-4a05-a557-95e0e07664f1-kube-api-access-86qhf\") on node \"crc\" DevicePath \"\"" Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.037482 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l" event={"ID":"ffefabbd-c814-4a05-a557-95e0e07664f1","Type":"ContainerDied","Data":"65a516d11a153e2acfb3bdcd03c8fbfbae3b2edcbb9131a51eaf1ca4c3ffc3e5"} Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.038005 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65a516d11a153e2acfb3bdcd03c8fbfbae3b2edcbb9131a51eaf1ca4c3ffc3e5" Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.037799 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l" Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.129564 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq"] Jan 06 14:29:57 crc kubenswrapper[4869]: E0106 14:29:57.130163 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffefabbd-c814-4a05-a557-95e0e07664f1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.130247 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffefabbd-c814-4a05-a557-95e0e07664f1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.130534 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffefabbd-c814-4a05-a557-95e0e07664f1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.131184 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq" Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.134599 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.135065 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.135594 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.144920 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq"] Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.188015 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.292124 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/066e87d6-6d7a-487c-b142-0dd883402ebf-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq\" (UID: \"066e87d6-6d7a-487c-b142-0dd883402ebf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq" Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.292337 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4lnq\" (UniqueName: \"kubernetes.io/projected/066e87d6-6d7a-487c-b142-0dd883402ebf-kube-api-access-k4lnq\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq\" (UID: \"066e87d6-6d7a-487c-b142-0dd883402ebf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq" Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.292431 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/066e87d6-6d7a-487c-b142-0dd883402ebf-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq\" (UID: \"066e87d6-6d7a-487c-b142-0dd883402ebf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq" Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.394546 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4lnq\" (UniqueName: \"kubernetes.io/projected/066e87d6-6d7a-487c-b142-0dd883402ebf-kube-api-access-k4lnq\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq\" (UID: \"066e87d6-6d7a-487c-b142-0dd883402ebf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq" Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.394638 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/066e87d6-6d7a-487c-b142-0dd883402ebf-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq\" (UID: \"066e87d6-6d7a-487c-b142-0dd883402ebf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq" Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.394783 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/066e87d6-6d7a-487c-b142-0dd883402ebf-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq\" (UID: \"066e87d6-6d7a-487c-b142-0dd883402ebf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq" Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.404198 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/066e87d6-6d7a-487c-b142-0dd883402ebf-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq\" (UID: \"066e87d6-6d7a-487c-b142-0dd883402ebf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq" Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.404338 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/066e87d6-6d7a-487c-b142-0dd883402ebf-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq\" (UID: \"066e87d6-6d7a-487c-b142-0dd883402ebf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq" Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.419241 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4lnq\" (UniqueName: \"kubernetes.io/projected/066e87d6-6d7a-487c-b142-0dd883402ebf-kube-api-access-k4lnq\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq\" (UID: \"066e87d6-6d7a-487c-b142-0dd883402ebf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq" Jan 06 14:29:57 crc kubenswrapper[4869]: I0106 14:29:57.508739 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq" Jan 06 14:29:58 crc kubenswrapper[4869]: I0106 14:29:58.037563 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq"] Jan 06 14:29:59 crc kubenswrapper[4869]: I0106 14:29:59.060241 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq" event={"ID":"066e87d6-6d7a-487c-b142-0dd883402ebf","Type":"ContainerStarted","Data":"6a9c97a86e992256dee20ef2f2954fec396d3a57fa394162aeb923f0c7e45653"} Jan 06 14:29:59 crc kubenswrapper[4869]: I0106 14:29:59.061940 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq" event={"ID":"066e87d6-6d7a-487c-b142-0dd883402ebf","Type":"ContainerStarted","Data":"0e09815bc6e89f321897797af29130c68b476d603fe0242ca97007f9465199ad"} Jan 06 14:29:59 crc kubenswrapper[4869]: I0106 14:29:59.098232 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq" podStartSLOduration=1.6700080320000001 podStartE2EDuration="2.098203831s" podCreationTimestamp="2026-01-06 14:29:57 +0000 UTC" firstStartedPulling="2026-01-06 14:29:58.054617969 +0000 UTC m=+1816.594305633" lastFinishedPulling="2026-01-06 14:29:58.482813738 +0000 UTC m=+1817.022501432" observedRunningTime="2026-01-06 14:29:59.084477225 +0000 UTC m=+1817.624164899" watchObservedRunningTime="2026-01-06 14:29:59.098203831 +0000 UTC m=+1817.637891535" Jan 06 14:30:00 crc kubenswrapper[4869]: I0106 14:30:00.150907 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29461830-wspx7"] Jan 06 14:30:00 crc kubenswrapper[4869]: I0106 14:30:00.152860 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29461830-wspx7" Jan 06 14:30:00 crc kubenswrapper[4869]: I0106 14:30:00.154965 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 06 14:30:00 crc kubenswrapper[4869]: I0106 14:30:00.159118 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 06 14:30:00 crc kubenswrapper[4869]: I0106 14:30:00.165690 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29461830-wspx7"] Jan 06 14:30:00 crc kubenswrapper[4869]: I0106 14:30:00.258105 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b-config-volume\") pod \"collect-profiles-29461830-wspx7\" (UID: \"a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461830-wspx7" Jan 06 14:30:00 crc kubenswrapper[4869]: I0106 14:30:00.258207 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4zq9\" (UniqueName: \"kubernetes.io/projected/a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b-kube-api-access-c4zq9\") pod \"collect-profiles-29461830-wspx7\" (UID: \"a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461830-wspx7" Jan 06 14:30:00 crc kubenswrapper[4869]: I0106 14:30:00.258309 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b-secret-volume\") pod \"collect-profiles-29461830-wspx7\" (UID: \"a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461830-wspx7" Jan 06 14:30:00 crc kubenswrapper[4869]: I0106 14:30:00.360223 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b-config-volume\") pod \"collect-profiles-29461830-wspx7\" (UID: \"a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461830-wspx7" Jan 06 14:30:00 crc kubenswrapper[4869]: I0106 14:30:00.360284 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4zq9\" (UniqueName: \"kubernetes.io/projected/a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b-kube-api-access-c4zq9\") pod \"collect-profiles-29461830-wspx7\" (UID: \"a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461830-wspx7" Jan 06 14:30:00 crc kubenswrapper[4869]: I0106 14:30:00.360328 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b-secret-volume\") pod \"collect-profiles-29461830-wspx7\" (UID: \"a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461830-wspx7" Jan 06 14:30:00 crc kubenswrapper[4869]: I0106 14:30:00.361198 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b-config-volume\") pod 
\"collect-profiles-29461830-wspx7\" (UID: \"a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461830-wspx7" Jan 06 14:30:00 crc kubenswrapper[4869]: I0106 14:30:00.366127 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b-secret-volume\") pod \"collect-profiles-29461830-wspx7\" (UID: \"a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461830-wspx7" Jan 06 14:30:00 crc kubenswrapper[4869]: I0106 14:30:00.380921 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4zq9\" (UniqueName: \"kubernetes.io/projected/a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b-kube-api-access-c4zq9\") pod \"collect-profiles-29461830-wspx7\" (UID: \"a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461830-wspx7" Jan 06 14:30:00 crc kubenswrapper[4869]: I0106 14:30:00.483924 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29461830-wspx7" Jan 06 14:30:00 crc kubenswrapper[4869]: I0106 14:30:00.966396 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29461830-wspx7"] Jan 06 14:30:01 crc kubenswrapper[4869]: I0106 14:30:01.083767 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29461830-wspx7" event={"ID":"a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b","Type":"ContainerStarted","Data":"ca1af00c19d35781d012410542b81213a04e39199ce8dd500d20206dd433d6e1"} Jan 06 14:30:02 crc kubenswrapper[4869]: I0106 14:30:02.097260 4869 generic.go:334] "Generic (PLEG): container finished" podID="a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b" containerID="30a8ca07dd1e146a54d6d0eaf6c8ec9484c2bb24ad295c0d151faa6d5969e692" exitCode=0 Jan 06 14:30:02 crc kubenswrapper[4869]: I0106 14:30:02.097404 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29461830-wspx7" event={"ID":"a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b","Type":"ContainerDied","Data":"30a8ca07dd1e146a54d6d0eaf6c8ec9484c2bb24ad295c0d151faa6d5969e692"} Jan 06 14:30:03 crc kubenswrapper[4869]: I0106 14:30:03.438062 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29461830-wspx7" Jan 06 14:30:03 crc kubenswrapper[4869]: I0106 14:30:03.522413 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b-config-volume\") pod \"a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b\" (UID: \"a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b\") " Jan 06 14:30:03 crc kubenswrapper[4869]: I0106 14:30:03.522546 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b-secret-volume\") pod \"a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b\" (UID: \"a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b\") " Jan 06 14:30:03 crc kubenswrapper[4869]: I0106 14:30:03.522772 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4zq9\" (UniqueName: \"kubernetes.io/projected/a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b-kube-api-access-c4zq9\") pod \"a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b\" (UID: \"a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b\") " Jan 06 14:30:03 crc kubenswrapper[4869]: I0106 14:30:03.523281 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b-config-volume" (OuterVolumeSpecName: "config-volume") pod "a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b" (UID: "a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:30:03 crc kubenswrapper[4869]: I0106 14:30:03.523918 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 06 14:30:03 crc kubenswrapper[4869]: I0106 14:30:03.527844 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b-kube-api-access-c4zq9" (OuterVolumeSpecName: "kube-api-access-c4zq9") pod "a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b" (UID: "a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b"). InnerVolumeSpecName "kube-api-access-c4zq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:30:03 crc kubenswrapper[4869]: I0106 14:30:03.541719 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b" (UID: "a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:30:03 crc kubenswrapper[4869]: I0106 14:30:03.625958 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4zq9\" (UniqueName: \"kubernetes.io/projected/a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b-kube-api-access-c4zq9\") on node \"crc\" DevicePath \"\"" Jan 06 14:30:03 crc kubenswrapper[4869]: I0106 14:30:03.626028 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 06 14:30:04 crc kubenswrapper[4869]: I0106 14:30:04.124465 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29461830-wspx7" event={"ID":"a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b","Type":"ContainerDied","Data":"ca1af00c19d35781d012410542b81213a04e39199ce8dd500d20206dd433d6e1"} Jan 06 14:30:04 crc kubenswrapper[4869]: I0106 14:30:04.124867 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca1af00c19d35781d012410542b81213a04e39199ce8dd500d20206dd433d6e1" Jan 06 14:30:04 crc kubenswrapper[4869]: I0106 14:30:04.124703 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29461830-wspx7" Jan 06 14:30:08 crc kubenswrapper[4869]: I0106 14:30:08.038906 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-dh8p2"] Jan 06 14:30:08 crc kubenswrapper[4869]: I0106 14:30:08.046562 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-dh8p2"] Jan 06 14:30:08 crc kubenswrapper[4869]: I0106 14:30:08.705531 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f" Jan 06 14:30:08 crc kubenswrapper[4869]: E0106 14:30:08.706057 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:30:09 crc kubenswrapper[4869]: I0106 14:30:09.032821 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rqkfr"] Jan 06 14:30:09 crc kubenswrapper[4869]: I0106 14:30:09.043105 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rqkfr"] Jan 06 14:30:09 crc kubenswrapper[4869]: I0106 14:30:09.170988 4869 generic.go:334] "Generic (PLEG): container finished" podID="066e87d6-6d7a-487c-b142-0dd883402ebf" containerID="6a9c97a86e992256dee20ef2f2954fec396d3a57fa394162aeb923f0c7e45653" exitCode=0 Jan 06 14:30:09 crc kubenswrapper[4869]: I0106 14:30:09.171089 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq" event={"ID":"066e87d6-6d7a-487c-b142-0dd883402ebf","Type":"ContainerDied","Data":"6a9c97a86e992256dee20ef2f2954fec396d3a57fa394162aeb923f0c7e45653"} Jan 06 14:30:09 crc kubenswrapper[4869]: I0106 14:30:09.727070 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9808d05c-1692-4b1f-b1be-5060fc290609" 
path="/var/lib/kubelet/pods/9808d05c-1692-4b1f-b1be-5060fc290609/volumes" Jan 06 14:30:09 crc kubenswrapper[4869]: I0106 14:30:09.727997 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f" path="/var/lib/kubelet/pods/e5fce302-85b2-4b6d-8a2e-b4ba8b87a55f/volumes" Jan 06 14:30:10 crc kubenswrapper[4869]: I0106 14:30:10.789797 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq" Jan 06 14:30:10 crc kubenswrapper[4869]: I0106 14:30:10.892270 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4lnq\" (UniqueName: \"kubernetes.io/projected/066e87d6-6d7a-487c-b142-0dd883402ebf-kube-api-access-k4lnq\") pod \"066e87d6-6d7a-487c-b142-0dd883402ebf\" (UID: \"066e87d6-6d7a-487c-b142-0dd883402ebf\") " Jan 06 14:30:10 crc kubenswrapper[4869]: I0106 14:30:10.892362 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/066e87d6-6d7a-487c-b142-0dd883402ebf-ssh-key-openstack-edpm-ipam\") pod \"066e87d6-6d7a-487c-b142-0dd883402ebf\" (UID: \"066e87d6-6d7a-487c-b142-0dd883402ebf\") " Jan 06 14:30:10 crc kubenswrapper[4869]: I0106 14:30:10.892535 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/066e87d6-6d7a-487c-b142-0dd883402ebf-inventory\") pod \"066e87d6-6d7a-487c-b142-0dd883402ebf\" (UID: \"066e87d6-6d7a-487c-b142-0dd883402ebf\") " Jan 06 14:30:10 crc kubenswrapper[4869]: I0106 14:30:10.898512 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/066e87d6-6d7a-487c-b142-0dd883402ebf-kube-api-access-k4lnq" (OuterVolumeSpecName: "kube-api-access-k4lnq") pod "066e87d6-6d7a-487c-b142-0dd883402ebf" (UID: "066e87d6-6d7a-487c-b142-0dd883402ebf"). InnerVolumeSpecName "kube-api-access-k4lnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:30:10 crc kubenswrapper[4869]: I0106 14:30:10.920535 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/066e87d6-6d7a-487c-b142-0dd883402ebf-inventory" (OuterVolumeSpecName: "inventory") pod "066e87d6-6d7a-487c-b142-0dd883402ebf" (UID: "066e87d6-6d7a-487c-b142-0dd883402ebf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:30:10 crc kubenswrapper[4869]: I0106 14:30:10.939295 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/066e87d6-6d7a-487c-b142-0dd883402ebf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "066e87d6-6d7a-487c-b142-0dd883402ebf" (UID: "066e87d6-6d7a-487c-b142-0dd883402ebf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:30:10 crc kubenswrapper[4869]: I0106 14:30:10.994543 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4lnq\" (UniqueName: \"kubernetes.io/projected/066e87d6-6d7a-487c-b142-0dd883402ebf-kube-api-access-k4lnq\") on node \"crc\" DevicePath \"\"" Jan 06 14:30:10 crc kubenswrapper[4869]: I0106 14:30:10.994616 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/066e87d6-6d7a-487c-b142-0dd883402ebf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:30:10 crc kubenswrapper[4869]: I0106 14:30:10.994630 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/066e87d6-6d7a-487c-b142-0dd883402ebf-inventory\") on node \"crc\" DevicePath \"\"" Jan 06 14:30:11 crc kubenswrapper[4869]: I0106 14:30:11.191096 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq" event={"ID":"066e87d6-6d7a-487c-b142-0dd883402ebf","Type":"ContainerDied","Data":"0e09815bc6e89f321897797af29130c68b476d603fe0242ca97007f9465199ad"} Jan 06 14:30:11 crc kubenswrapper[4869]: I0106 14:30:11.191138 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e09815bc6e89f321897797af29130c68b476d603fe0242ca97007f9465199ad" Jan 06 14:30:11 crc kubenswrapper[4869]: I0106 14:30:11.191188 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq" Jan 06 14:30:22 crc kubenswrapper[4869]: I0106 14:30:22.718326 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f" Jan 06 14:30:22 crc kubenswrapper[4869]: E0106 14:30:22.719485 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:30:37 crc kubenswrapper[4869]: I0106 14:30:37.704280 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f" Jan 06 14:30:38 crc kubenswrapper[4869]: I0106 14:30:38.498590 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerStarted","Data":"6a60a4b71f0d885ff0258ff4ceb5607a593a8d1d043cc7d68f413ed6f7581816"} Jan 06 14:30:54 crc kubenswrapper[4869]: I0106 14:30:54.021831 4869 scope.go:117] "RemoveContainer" containerID="5f2cb8b410e69514df37a550dddbb195cb3b96f1de61343c8dbc5ac28bc18d8a" Jan 06 14:30:54 crc kubenswrapper[4869]: I0106 14:30:54.063799 4869 scope.go:117] "RemoveContainer" containerID="6612a01c1789b89004e1731656d73c966545dd278df7d0351f8fc39fd576fc62" Jan 06 14:30:55 crc kubenswrapper[4869]: I0106 14:30:55.042280 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-l9qkx"] Jan 06 14:30:55 crc kubenswrapper[4869]: I0106 14:30:55.049709 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-l9qkx"] Jan 06 14:30:55 crc 
kubenswrapper[4869]: I0106 14:30:55.715249 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c6f6bc0-798b-494a-96d0-a27db4a8acf6" path="/var/lib/kubelet/pods/0c6f6bc0-798b-494a-96d0-a27db4a8acf6/volumes" Jan 06 14:31:54 crc kubenswrapper[4869]: I0106 14:31:54.207578 4869 scope.go:117] "RemoveContainer" containerID="21075043b994dc8e25b5abed974d30d7b0637788f4ab412e99ecb4001081a32b" Jan 06 14:32:54 crc kubenswrapper[4869]: I0106 14:32:54.316531 4869 scope.go:117] "RemoveContainer" containerID="40dfe1a6fce454f784da5f857b15792e96636ab0f30f698865cd994cf3ea7a06" Jan 06 14:32:54 crc kubenswrapper[4869]: I0106 14:32:54.340419 4869 scope.go:117] "RemoveContainer" containerID="14129807b691d702ae6a174ef0df839cda0831521fae4e8c0e05ab4049c6059c" Jan 06 14:32:54 crc kubenswrapper[4869]: I0106 14:32:54.370453 4869 scope.go:117] "RemoveContainer" containerID="36b2853b105e021626ffd80e8a4fcc69b950ad7ef7dd521863a3a822fb240aea" Jan 06 14:33:03 crc kubenswrapper[4869]: I0106 14:33:03.622208 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:33:03 crc kubenswrapper[4869]: I0106 14:33:03.622981 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:33:33 crc kubenswrapper[4869]: I0106 14:33:33.621861 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:33:33 crc kubenswrapper[4869]: I0106 14:33:33.622484 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.277009 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx"] Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.287461 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ztfnx"] Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.294299 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6"] Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.300116 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7"] Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.305783 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q"] Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.311267 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5"] Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.316655 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct"] Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.322414 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d5hv6"] Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.333784 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-zz4q7"] Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.341296 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l"] Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.352036 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-8zqct"] Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.358146 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-s6b4l"] Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.364097 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wk646"] Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.370124 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq"] Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.376149 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt"] Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.383133 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7qjr5"] Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.390131 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-gnv5q"] Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.396229 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d7dmt"] Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.402182 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wk646"] Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.407994 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rr7dq"] Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.716031 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="066e87d6-6d7a-487c-b142-0dd883402ebf" path="/var/lib/kubelet/pods/066e87d6-6d7a-487c-b142-0dd883402ebf/volumes" Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.716938 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bfa4b45-9040-4ea6-b8e1-0fd641cb4761" path="/var/lib/kubelet/pods/0bfa4b45-9040-4ea6-b8e1-0fd641cb4761/volumes" Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.717568 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f91c71f-6aed-457a-a9dc-29501d415575" path="/var/lib/kubelet/pods/1f91c71f-6aed-457a-a9dc-29501d415575/volumes" Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.718102 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="200765a0-cf26-4e96-bee0-30dd911e7576" path="/var/lib/kubelet/pods/200765a0-cf26-4e96-bee0-30dd911e7576/volumes" Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.719115 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e118b-1f27-46d6-aca1-5060fb3ba1aa" path="/var/lib/kubelet/pods/496e118b-1f27-46d6-aca1-5060fb3ba1aa/volumes" Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.719592 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="809af13b-e2f3-4eed-a5dc-9de20cab3ef4" path="/var/lib/kubelet/pods/809af13b-e2f3-4eed-a5dc-9de20cab3ef4/volumes" Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.720111 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83886cc7-f2ac-4f61-bf3e-18eee999fae1" path="/var/lib/kubelet/pods/83886cc7-f2ac-4f61-bf3e-18eee999fae1/volumes" Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.721538 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f450f40-ddac-47b1-b571-35d3c04fdcfc" path="/var/lib/kubelet/pods/8f450f40-ddac-47b1-b571-35d3c04fdcfc/volumes" Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.722400 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a3066-8fd0-4ce8-be80-4fab4f8c9042" path="/var/lib/kubelet/pods/b05a3066-8fd0-4ce8-be80-4fab4f8c9042/volumes" Jan 06 14:33:43 crc kubenswrapper[4869]: I0106 14:33:43.725068 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffefabbd-c814-4a05-a557-95e0e07664f1" path="/var/lib/kubelet/pods/ffefabbd-c814-4a05-a557-95e0e07664f1/volumes" Jan 06 14:33:48 crc kubenswrapper[4869]: I0106 14:33:48.960642 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5"] Jan 06 14:33:48 crc kubenswrapper[4869]: E0106 14:33:48.961802 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b" containerName="collect-profiles" Jan 06 14:33:48 crc kubenswrapper[4869]: I0106 14:33:48.961815 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b" containerName="collect-profiles" Jan 06 14:33:48 crc kubenswrapper[4869]: E0106 14:33:48.961843 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="066e87d6-6d7a-487c-b142-0dd883402ebf" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 06 14:33:48 crc kubenswrapper[4869]: I0106 14:33:48.961852 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="066e87d6-6d7a-487c-b142-0dd883402ebf" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 06 14:33:48 crc kubenswrapper[4869]: I0106 14:33:48.962018 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3aa3b05-7719-4dc1-b3c5-2cc8e8de659b" containerName="collect-profiles" Jan 06 14:33:48 crc kubenswrapper[4869]: I0106 14:33:48.962030 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="066e87d6-6d7a-487c-b142-0dd883402ebf" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 06 14:33:48 crc kubenswrapper[4869]: I0106 14:33:48.962610 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" Jan 06 14:33:48 crc kubenswrapper[4869]: I0106 14:33:48.964377 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:33:48 crc kubenswrapper[4869]: I0106 14:33:48.965163 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 06 14:33:48 crc kubenswrapper[4869]: I0106 14:33:48.965328 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:33:48 crc kubenswrapper[4869]: I0106 14:33:48.965475 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:33:48 crc kubenswrapper[4869]: I0106 14:33:48.965580 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:33:48 crc kubenswrapper[4869]: I0106 14:33:48.989111 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5"] Jan 06 14:33:49 crc kubenswrapper[4869]: I0106 14:33:49.040148 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5\" (UID: \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" Jan 06 14:33:49 crc kubenswrapper[4869]: I0106 14:33:49.040188 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5\" (UID: \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" Jan 06 14:33:49 crc kubenswrapper[4869]: I0106 14:33:49.040251 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5\" (UID: \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" Jan 06 14:33:49 crc kubenswrapper[4869]: I0106 14:33:49.040293 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp2d5\" (UniqueName: \"kubernetes.io/projected/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-kube-api-access-dp2d5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5\" (UID: \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" Jan 06 14:33:49 crc kubenswrapper[4869]: I0106 14:33:49.040328 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5\" (UID: \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" Jan 06 14:33:49 crc kubenswrapper[4869]: I0106 14:33:49.140988 4869 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5\" (UID: \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" Jan 06 14:33:49 crc kubenswrapper[4869]: I0106 14:33:49.141059 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp2d5\" (UniqueName: \"kubernetes.io/projected/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-kube-api-access-dp2d5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5\" (UID: \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" Jan 06 14:33:49 crc kubenswrapper[4869]: I0106 14:33:49.141103 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5\" (UID: \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" Jan 06 14:33:49 crc kubenswrapper[4869]: I0106 14:33:49.141172 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5\" (UID: \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" Jan 06 14:33:49 crc kubenswrapper[4869]: I0106 14:33:49.141196 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5\" (UID: \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" Jan 06 14:33:49 crc kubenswrapper[4869]: I0106 14:33:49.147243 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5\" (UID: \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" Jan 06 14:33:49 crc kubenswrapper[4869]: I0106 14:33:49.147657 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5\" (UID: \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" Jan 06 14:33:49 crc kubenswrapper[4869]: I0106 14:33:49.147797 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5\" (UID: \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" Jan 06 14:33:49 crc kubenswrapper[4869]: I0106 14:33:49.154647 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5\" (UID: \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" Jan 06 14:33:49 crc kubenswrapper[4869]: I0106 14:33:49.158057 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp2d5\" (UniqueName: \"kubernetes.io/projected/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-kube-api-access-dp2d5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5\" (UID: \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" Jan 06 14:33:49 crc kubenswrapper[4869]: I0106 14:33:49.289744 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" Jan 06 14:33:49 crc kubenswrapper[4869]: I0106 14:33:49.838069 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5"] Jan 06 14:33:49 crc kubenswrapper[4869]: I0106 14:33:49.844571 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 06 14:33:50 crc kubenswrapper[4869]: I0106 14:33:50.507456 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" event={"ID":"ab1d6597-5db5-4759-b339-5ad35fcdbd8a","Type":"ContainerStarted","Data":"935bbce09e739bf092c84b20cd85dd46827d8a426186873b1174ab8c6b137f76"} Jan 06 14:33:50 crc kubenswrapper[4869]: I0106 14:33:50.507524 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" event={"ID":"ab1d6597-5db5-4759-b339-5ad35fcdbd8a","Type":"ContainerStarted","Data":"836e74f87984bbd6efc13d20df2123f8c9b31d09e0dfba1ed3b7dd1be54343c8"} Jan 06 14:33:50 crc kubenswrapper[4869]: I0106 14:33:50.531159 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" podStartSLOduration=2.091949894 podStartE2EDuration="2.531140874s" podCreationTimestamp="2026-01-06 14:33:48 +0000 UTC" firstStartedPulling="2026-01-06 14:33:49.844349073 +0000 UTC m=+2048.384036737" lastFinishedPulling="2026-01-06 14:33:50.283540053 +0000 UTC m=+2048.823227717" observedRunningTime="2026-01-06 14:33:50.52535343 +0000 UTC m=+2049.065041134" watchObservedRunningTime="2026-01-06 14:33:50.531140874 +0000 UTC m=+2049.070828538" Jan 06 14:33:54 crc kubenswrapper[4869]: I0106 14:33:54.460972 4869 scope.go:117] "RemoveContainer" containerID="4b10df9a0bda839b5273b1b6703d809d616346977921fd72fa1951d2a4a0512a" Jan 06 14:33:54 crc kubenswrapper[4869]: I0106 14:33:54.550316 4869 scope.go:117] "RemoveContainer" containerID="a3692061bd5a107051fbf79c200c38647ee44d578d0d02e8cbcbbd2305ab4114" Jan 06 14:33:54 crc kubenswrapper[4869]: I0106 14:33:54.590188 4869 scope.go:117] "RemoveContainer" containerID="9140e1c73268098bf770faeae60957604859e22ed806a4d1f7848347b08653b5" Jan 06 14:33:54 crc kubenswrapper[4869]: I0106 14:33:54.635891 4869 scope.go:117] "RemoveContainer" containerID="68ce1357359b3094465a3024e49358c0c13ea052405d75e7af6cd30c1a391016" Jan 06 14:33:54 crc kubenswrapper[4869]: I0106 14:33:54.664221 4869 scope.go:117] "RemoveContainer" containerID="2a9d64a12fd3b35e4b2549e66b98f96a3c3184c64f588b0df588a4ba75bf2048" Jan 06 
14:34:02 crc kubenswrapper[4869]: I0106 14:34:02.630416 4869 generic.go:334] "Generic (PLEG): container finished" podID="ab1d6597-5db5-4759-b339-5ad35fcdbd8a" containerID="935bbce09e739bf092c84b20cd85dd46827d8a426186873b1174ab8c6b137f76" exitCode=0 Jan 06 14:34:02 crc kubenswrapper[4869]: I0106 14:34:02.630501 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" event={"ID":"ab1d6597-5db5-4759-b339-5ad35fcdbd8a","Type":"ContainerDied","Data":"935bbce09e739bf092c84b20cd85dd46827d8a426186873b1174ab8c6b137f76"} Jan 06 14:34:03 crc kubenswrapper[4869]: I0106 14:34:03.622445 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:34:03 crc kubenswrapper[4869]: I0106 14:34:03.622835 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:34:03 crc kubenswrapper[4869]: I0106 14:34:03.622883 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:34:03 crc kubenswrapper[4869]: I0106 14:34:03.623680 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a60a4b71f0d885ff0258ff4ceb5607a593a8d1d043cc7d68f413ed6f7581816"} pod="openshift-machine-config-operator/machine-config-daemon-kt9df" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 06 14:34:03 crc kubenswrapper[4869]: I0106 14:34:03.623745 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" containerID="cri-o://6a60a4b71f0d885ff0258ff4ceb5607a593a8d1d043cc7d68f413ed6f7581816" gracePeriod=600 Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.161816 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.360544 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-inventory\") pod \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\" (UID: \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\") " Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.361117 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-repo-setup-combined-ca-bundle\") pod \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\" (UID: \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\") " Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.361145 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-ssh-key-openstack-edpm-ipam\") pod \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\" (UID: \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\") " Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.361230 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-ceph\") pod \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\" (UID: \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\") " Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.361313 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp2d5\" (UniqueName: \"kubernetes.io/projected/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-kube-api-access-dp2d5\") pod \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\" (UID: \"ab1d6597-5db5-4759-b339-5ad35fcdbd8a\") " Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.368257 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-ceph" (OuterVolumeSpecName: "ceph") pod "ab1d6597-5db5-4759-b339-5ad35fcdbd8a" (UID: "ab1d6597-5db5-4759-b339-5ad35fcdbd8a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.369048 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "ab1d6597-5db5-4759-b339-5ad35fcdbd8a" (UID: "ab1d6597-5db5-4759-b339-5ad35fcdbd8a"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.375833 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-kube-api-access-dp2d5" (OuterVolumeSpecName: "kube-api-access-dp2d5") pod "ab1d6597-5db5-4759-b339-5ad35fcdbd8a" (UID: "ab1d6597-5db5-4759-b339-5ad35fcdbd8a"). InnerVolumeSpecName "kube-api-access-dp2d5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.392692 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ab1d6597-5db5-4759-b339-5ad35fcdbd8a" (UID: "ab1d6597-5db5-4759-b339-5ad35fcdbd8a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.400628 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-inventory" (OuterVolumeSpecName: "inventory") pod "ab1d6597-5db5-4759-b339-5ad35fcdbd8a" (UID: "ab1d6597-5db5-4759-b339-5ad35fcdbd8a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.464305 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp2d5\" (UniqueName: \"kubernetes.io/projected/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-kube-api-access-dp2d5\") on node \"crc\" DevicePath \"\"" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.464351 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-inventory\") on node \"crc\" DevicePath \"\"" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.464427 4869 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.464446 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.464459 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ab1d6597-5db5-4759-b339-5ad35fcdbd8a-ceph\") on node \"crc\" DevicePath \"\"" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.661815 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerDied","Data":"6a60a4b71f0d885ff0258ff4ceb5607a593a8d1d043cc7d68f413ed6f7581816"} Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.661859 4869 generic.go:334] "Generic (PLEG): container finished" podID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerID="6a60a4b71f0d885ff0258ff4ceb5607a593a8d1d043cc7d68f413ed6f7581816" exitCode=0 Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.661963 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerStarted","Data":"9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af"} Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.661999 4869 scope.go:117] "RemoveContainer" containerID="590679c878f517cb769acf589dc0fc782f75c9ebf5bc345c242759d8f84bc50f" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.665082 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" event={"ID":"ab1d6597-5db5-4759-b339-5ad35fcdbd8a","Type":"ContainerDied","Data":"836e74f87984bbd6efc13d20df2123f8c9b31d09e0dfba1ed3b7dd1be54343c8"} Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.665120 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="836e74f87984bbd6efc13d20df2123f8c9b31d09e0dfba1ed3b7dd1be54343c8" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.665245 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.777166 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8"] Jan 06 14:34:04 crc kubenswrapper[4869]: E0106 14:34:04.778068 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab1d6597-5db5-4759-b339-5ad35fcdbd8a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.778084 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab1d6597-5db5-4759-b339-5ad35fcdbd8a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.778284 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab1d6597-5db5-4759-b339-5ad35fcdbd8a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.778971 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.790189 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8"] Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.827204 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.827442 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.828110 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.828344 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.828512 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.887264 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8\" (UID: \"99252e16-5d75-4719-84a1-80ef3a8bfa39\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.887334 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-ceph\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8\" (UID: \"99252e16-5d75-4719-84a1-80ef3a8bfa39\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.887374 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8\" (UID: \"99252e16-5d75-4719-84a1-80ef3a8bfa39\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.887429 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8\" (UID: \"99252e16-5d75-4719-84a1-80ef3a8bfa39\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.887472 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg2xc\" (UniqueName: \"kubernetes.io/projected/99252e16-5d75-4719-84a1-80ef3a8bfa39-kube-api-access-dg2xc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8\" (UID: \"99252e16-5d75-4719-84a1-80ef3a8bfa39\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.989520 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8\" (UID: \"99252e16-5d75-4719-84a1-80ef3a8bfa39\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.989736 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8\" (UID: \"99252e16-5d75-4719-84a1-80ef3a8bfa39\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.989823 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8\" (UID: \"99252e16-5d75-4719-84a1-80ef3a8bfa39\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.989877 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8\" (UID: \"99252e16-5d75-4719-84a1-80ef3a8bfa39\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.990110 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg2xc\" (UniqueName: 
\"kubernetes.io/projected/99252e16-5d75-4719-84a1-80ef3a8bfa39-kube-api-access-dg2xc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8\" (UID: \"99252e16-5d75-4719-84a1-80ef3a8bfa39\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.996647 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8\" (UID: \"99252e16-5d75-4719-84a1-80ef3a8bfa39\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.996840 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8\" (UID: \"99252e16-5d75-4719-84a1-80ef3a8bfa39\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" Jan 06 14:34:04 crc kubenswrapper[4869]: I0106 14:34:04.997764 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8\" (UID: \"99252e16-5d75-4719-84a1-80ef3a8bfa39\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" Jan 06 14:34:05 crc kubenswrapper[4869]: I0106 14:34:05.002943 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8\" (UID: \"99252e16-5d75-4719-84a1-80ef3a8bfa39\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" Jan 06 14:34:05 crc kubenswrapper[4869]: I0106 14:34:05.020617 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg2xc\" (UniqueName: \"kubernetes.io/projected/99252e16-5d75-4719-84a1-80ef3a8bfa39-kube-api-access-dg2xc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8\" (UID: \"99252e16-5d75-4719-84a1-80ef3a8bfa39\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" Jan 06 14:34:05 crc kubenswrapper[4869]: I0106 14:34:05.147429 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" Jan 06 14:34:05 crc kubenswrapper[4869]: I0106 14:34:05.543521 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8"] Jan 06 14:34:05 crc kubenswrapper[4869]: W0106 14:34:05.544853 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99252e16_5d75_4719_84a1_80ef3a8bfa39.slice/crio-9fba4f388b0030bc0437905b07fcd60119eb40648bf26dc39a81292c7f26866b WatchSource:0}: Error finding container 9fba4f388b0030bc0437905b07fcd60119eb40648bf26dc39a81292c7f26866b: Status 404 returned error can't find the container with id 9fba4f388b0030bc0437905b07fcd60119eb40648bf26dc39a81292c7f26866b Jan 06 14:34:05 crc kubenswrapper[4869]: I0106 14:34:05.679217 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" event={"ID":"99252e16-5d75-4719-84a1-80ef3a8bfa39","Type":"ContainerStarted","Data":"9fba4f388b0030bc0437905b07fcd60119eb40648bf26dc39a81292c7f26866b"} Jan 06 14:34:06 crc kubenswrapper[4869]: I0106 14:34:06.694771 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" event={"ID":"99252e16-5d75-4719-84a1-80ef3a8bfa39","Type":"ContainerStarted","Data":"dc58c4ad4a90f44c16efc3e991ce518b5642a4d2a6ba964f2c8dd1a7ae9ebc40"} Jan 06 14:34:06 crc kubenswrapper[4869]: I0106 14:34:06.716027 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" podStartSLOduration=2.209149822 podStartE2EDuration="2.716000351s" podCreationTimestamp="2026-01-06 14:34:04 +0000 UTC" firstStartedPulling="2026-01-06 14:34:05.54778535 +0000 UTC m=+2064.087473014" lastFinishedPulling="2026-01-06 14:34:06.054635879 +0000 UTC m=+2064.594323543" observedRunningTime="2026-01-06 14:34:06.71516861 +0000 UTC m=+2065.254856274" watchObservedRunningTime="2026-01-06 14:34:06.716000351 +0000 UTC m=+2065.255688045" Jan 06 14:34:54 crc kubenswrapper[4869]: I0106 14:34:54.797730 4869 scope.go:117] "RemoveContainer" containerID="6146b3f7d9edd3f621fa613299e24db32247b1f424e3896d6bdc33902f806203" Jan 06 14:34:54 crc kubenswrapper[4869]: I0106 14:34:54.868837 4869 scope.go:117] "RemoveContainer" containerID="1a483f8e504a483407d023d5a2e5f4bf7ff2a885b53c3cd015ab222c33a525d8" Jan 06 14:35:49 crc kubenswrapper[4869]: I0106 14:35:49.639363 4869 generic.go:334] "Generic (PLEG): container finished" podID="99252e16-5d75-4719-84a1-80ef3a8bfa39" containerID="dc58c4ad4a90f44c16efc3e991ce518b5642a4d2a6ba964f2c8dd1a7ae9ebc40" exitCode=0 Jan 06 14:35:49 crc kubenswrapper[4869]: I0106 14:35:49.639490 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" event={"ID":"99252e16-5d75-4719-84a1-80ef3a8bfa39","Type":"ContainerDied","Data":"dc58c4ad4a90f44c16efc3e991ce518b5642a4d2a6ba964f2c8dd1a7ae9ebc40"} Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.094479 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.198241 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-ssh-key-openstack-edpm-ipam\") pod \"99252e16-5d75-4719-84a1-80ef3a8bfa39\" (UID: \"99252e16-5d75-4719-84a1-80ef3a8bfa39\") " Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.198329 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-ceph\") pod \"99252e16-5d75-4719-84a1-80ef3a8bfa39\" (UID: \"99252e16-5d75-4719-84a1-80ef3a8bfa39\") " Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.198411 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-inventory\") pod \"99252e16-5d75-4719-84a1-80ef3a8bfa39\" (UID: \"99252e16-5d75-4719-84a1-80ef3a8bfa39\") " Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.198455 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-bootstrap-combined-ca-bundle\") pod \"99252e16-5d75-4719-84a1-80ef3a8bfa39\" (UID: \"99252e16-5d75-4719-84a1-80ef3a8bfa39\") " Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.198523 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg2xc\" (UniqueName: \"kubernetes.io/projected/99252e16-5d75-4719-84a1-80ef3a8bfa39-kube-api-access-dg2xc\") pod \"99252e16-5d75-4719-84a1-80ef3a8bfa39\" (UID: \"99252e16-5d75-4719-84a1-80ef3a8bfa39\") " Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.204153 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-ceph" (OuterVolumeSpecName: "ceph") pod "99252e16-5d75-4719-84a1-80ef3a8bfa39" (UID: "99252e16-5d75-4719-84a1-80ef3a8bfa39"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.204369 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "99252e16-5d75-4719-84a1-80ef3a8bfa39" (UID: "99252e16-5d75-4719-84a1-80ef3a8bfa39"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.205677 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99252e16-5d75-4719-84a1-80ef3a8bfa39-kube-api-access-dg2xc" (OuterVolumeSpecName: "kube-api-access-dg2xc") pod "99252e16-5d75-4719-84a1-80ef3a8bfa39" (UID: "99252e16-5d75-4719-84a1-80ef3a8bfa39"). InnerVolumeSpecName "kube-api-access-dg2xc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.224988 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-inventory" (OuterVolumeSpecName: "inventory") pod "99252e16-5d75-4719-84a1-80ef3a8bfa39" (UID: "99252e16-5d75-4719-84a1-80ef3a8bfa39"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.236827 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "99252e16-5d75-4719-84a1-80ef3a8bfa39" (UID: "99252e16-5d75-4719-84a1-80ef3a8bfa39"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.300881 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg2xc\" (UniqueName: \"kubernetes.io/projected/99252e16-5d75-4719-84a1-80ef3a8bfa39-kube-api-access-dg2xc\") on node \"crc\" DevicePath \"\"" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.300941 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.300958 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-ceph\") on node \"crc\" DevicePath \"\"" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.300971 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-inventory\") on node \"crc\" DevicePath \"\"" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.300984 4869 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99252e16-5d75-4719-84a1-80ef3a8bfa39-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.657566 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" event={"ID":"99252e16-5d75-4719-84a1-80ef3a8bfa39","Type":"ContainerDied","Data":"9fba4f388b0030bc0437905b07fcd60119eb40648bf26dc39a81292c7f26866b"} Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.657653 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fba4f388b0030bc0437905b07fcd60119eb40648bf26dc39a81292c7f26866b" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.658045 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.749912 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk"] Jan 06 14:35:51 crc kubenswrapper[4869]: E0106 14:35:51.750328 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99252e16-5d75-4719-84a1-80ef3a8bfa39" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.750352 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="99252e16-5d75-4719-84a1-80ef3a8bfa39" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.750577 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="99252e16-5d75-4719-84a1-80ef3a8bfa39" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.751547 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.753720 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.756379 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.756714 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.757301 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.758607 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.759820 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk"] Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.810307 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk\" (UID: \"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.810606 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk\" (UID: \"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.810828 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f52zn\" (UniqueName: \"kubernetes.io/projected/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-kube-api-access-f52zn\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk\" (UID: \"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.810945 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk\" (UID: \"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.912541 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk\" (UID: \"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.912618 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk\" (UID: \"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.912702 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk\" (UID: \"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.912796 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f52zn\" (UniqueName: \"kubernetes.io/projected/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-kube-api-access-f52zn\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk\" (UID: \"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.916503 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk\" (UID: \"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.916734 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk\" (UID: \"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.922483 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk\" (UID: \"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk" Jan 06 14:35:51 crc kubenswrapper[4869]: I0106 14:35:51.929453 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f52zn\" (UniqueName: \"kubernetes.io/projected/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-kube-api-access-f52zn\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk\" (UID: \"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk" Jan 06 14:35:52 crc kubenswrapper[4869]: I0106 14:35:52.074371 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk" Jan 06 14:35:52 crc kubenswrapper[4869]: I0106 14:35:52.604745 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk"] Jan 06 14:35:52 crc kubenswrapper[4869]: I0106 14:35:52.665863 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk" event={"ID":"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1","Type":"ContainerStarted","Data":"440725df359167dfbc533cceacdb6951a5f1f77b3d3c4fb61faef21ba1cf2659"} Jan 06 14:35:53 crc kubenswrapper[4869]: I0106 14:35:53.678034 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk" event={"ID":"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1","Type":"ContainerStarted","Data":"87f56a64b6b822cab4f477341f46c90fc56dec885d64afec73716ccf33d08247"} Jan 06 14:35:53 crc kubenswrapper[4869]: I0106 14:35:53.699974 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk" podStartSLOduration=2.291528372 podStartE2EDuration="2.699952765s" podCreationTimestamp="2026-01-06 14:35:51 +0000 UTC" firstStartedPulling="2026-01-06 14:35:52.612466327 +0000 UTC m=+2171.152154011" lastFinishedPulling="2026-01-06 14:35:53.02089074 +0000 UTC m=+2171.560578404" observedRunningTime="2026-01-06 14:35:53.693795859 +0000 UTC m=+2172.233483523" watchObservedRunningTime="2026-01-06 14:35:53.699952765 +0000 UTC m=+2172.239640429" Jan 06 14:35:54 crc kubenswrapper[4869]: I0106 14:35:54.984348 4869 scope.go:117] "RemoveContainer" containerID="351e7c373729b88778d24b639e18c87b1d6846663e808d9d931391ccfd1de8ef" Jan 06 14:35:55 crc kubenswrapper[4869]: I0106 14:35:55.017409 4869 scope.go:117] "RemoveContainer" containerID="b2cf481f27b26951ff65ad884bec0a9db206e9e72f69cce25115dfb74ecf6d2a" Jan 06 14:36:03 crc kubenswrapper[4869]: I0106 14:36:03.622420 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:36:03 crc kubenswrapper[4869]: I0106 14:36:03.622983 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
Jan 06 14:36:19 crc kubenswrapper[4869]: I0106 14:36:19.927181 4869 generic.go:334] "Generic (PLEG): container finished" podID="f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1" containerID="87f56a64b6b822cab4f477341f46c90fc56dec885d64afec73716ccf33d08247" exitCode=0
Jan 06 14:36:19 crc kubenswrapper[4869]: I0106 14:36:19.927274 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk" event={"ID":"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1","Type":"ContainerDied","Data":"87f56a64b6b822cab4f477341f46c90fc56dec885d64afec73716ccf33d08247"}
Jan 06 14:36:21 crc kubenswrapper[4869]: I0106 14:36:21.392810 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk"
Jan 06 14:36:21 crc kubenswrapper[4869]: I0106 14:36:21.520786 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-ssh-key-openstack-edpm-ipam\") pod \"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1\" (UID: \"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1\") "
Jan 06 14:36:21 crc kubenswrapper[4869]: I0106 14:36:21.520987 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f52zn\" (UniqueName: \"kubernetes.io/projected/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-kube-api-access-f52zn\") pod \"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1\" (UID: \"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1\") "
Jan 06 14:36:21 crc kubenswrapper[4869]: I0106 14:36:21.521066 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-ceph\") pod \"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1\" (UID: \"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1\") "
Jan 06 14:36:21 crc kubenswrapper[4869]: I0106 14:36:21.521090 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-inventory\") pod \"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1\" (UID: \"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1\") "
Jan 06 14:36:21 crc kubenswrapper[4869]: I0106 14:36:21.552997 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-ceph" (OuterVolumeSpecName: "ceph") pod "f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1" (UID: "f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 06 14:36:21 crc kubenswrapper[4869]: I0106 14:36:21.560032 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-kube-api-access-f52zn" (OuterVolumeSpecName: "kube-api-access-f52zn") pod "f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1" (UID: "f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1"). InnerVolumeSpecName "kube-api-access-f52zn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 06 14:36:21 crc kubenswrapper[4869]: I0106 14:36:21.568006 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-inventory" (OuterVolumeSpecName: "inventory") pod "f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1" (UID: "f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 06 14:36:21 crc kubenswrapper[4869]: I0106 14:36:21.570132 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1" (UID: "f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 06 14:36:21 crc kubenswrapper[4869]: I0106 14:36:21.627865 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f52zn\" (UniqueName: \"kubernetes.io/projected/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-kube-api-access-f52zn\") on node \"crc\" DevicePath \"\""
Jan 06 14:36:21 crc kubenswrapper[4869]: I0106 14:36:21.627929 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-ceph\") on node \"crc\" DevicePath \"\""
Jan 06 14:36:21 crc kubenswrapper[4869]: I0106 14:36:21.627948 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-inventory\") on node \"crc\" DevicePath \"\""
Jan 06 14:36:21 crc kubenswrapper[4869]: I0106 14:36:21.628026 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 06 14:36:21 crc kubenswrapper[4869]: I0106 14:36:21.942179 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk" event={"ID":"f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1","Type":"ContainerDied","Data":"440725df359167dfbc533cceacdb6951a5f1f77b3d3c4fb61faef21ba1cf2659"}
Jan 06 14:36:21 crc kubenswrapper[4869]: I0106 14:36:21.942246 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="440725df359167dfbc533cceacdb6951a5f1f77b3d3c4fb61faef21ba1cf2659"
Jan 06 14:36:21 crc kubenswrapper[4869]: I0106 14:36:21.942315 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.051997 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r"]
Jan 06 14:36:22 crc kubenswrapper[4869]: E0106 14:36:22.052444 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.052470 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.052655 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.053401 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.059192 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.059573 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.060311 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.060781 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.061024 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.070637 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r"]
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.136545 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0c8d127-5a60-4d66-8c61-1b430b37a374-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r\" (UID: \"c0c8d127-5a60-4d66-8c61-1b430b37a374\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.136837 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c0c8d127-5a60-4d66-8c61-1b430b37a374-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r\" (UID: \"c0c8d127-5a60-4d66-8c61-1b430b37a374\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.137141 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfm8n\" (UniqueName: \"kubernetes.io/projected/c0c8d127-5a60-4d66-8c61-1b430b37a374-kube-api-access-sfm8n\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r\" (UID: \"c0c8d127-5a60-4d66-8c61-1b430b37a374\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.137437 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0c8d127-5a60-4d66-8c61-1b430b37a374-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r\" (UID: \"c0c8d127-5a60-4d66-8c61-1b430b37a374\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.239508 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfm8n\" (UniqueName: \"kubernetes.io/projected/c0c8d127-5a60-4d66-8c61-1b430b37a374-kube-api-access-sfm8n\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r\" (UID: \"c0c8d127-5a60-4d66-8c61-1b430b37a374\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.239590 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0c8d127-5a60-4d66-8c61-1b430b37a374-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r\" (UID: \"c0c8d127-5a60-4d66-8c61-1b430b37a374\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.239629 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0c8d127-5a60-4d66-8c61-1b430b37a374-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r\" (UID: \"c0c8d127-5a60-4d66-8c61-1b430b37a374\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.239691 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c0c8d127-5a60-4d66-8c61-1b430b37a374-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r\" (UID: \"c0c8d127-5a60-4d66-8c61-1b430b37a374\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.245857 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0c8d127-5a60-4d66-8c61-1b430b37a374-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r\" (UID: \"c0c8d127-5a60-4d66-8c61-1b430b37a374\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.248241 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c0c8d127-5a60-4d66-8c61-1b430b37a374-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r\" (UID: \"c0c8d127-5a60-4d66-8c61-1b430b37a374\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.268395 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0c8d127-5a60-4d66-8c61-1b430b37a374-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r\" (UID: \"c0c8d127-5a60-4d66-8c61-1b430b37a374\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.271292 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfm8n\" (UniqueName: \"kubernetes.io/projected/c0c8d127-5a60-4d66-8c61-1b430b37a374-kube-api-access-sfm8n\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r\" (UID: \"c0c8d127-5a60-4d66-8c61-1b430b37a374\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.376796 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r"
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.896560 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r"]
Jan 06 14:36:22 crc kubenswrapper[4869]: I0106 14:36:22.951457 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r" event={"ID":"c0c8d127-5a60-4d66-8c61-1b430b37a374","Type":"ContainerStarted","Data":"86eaac21d141962b079a5f02670740f06f0ff11a26f6453dbd7ea51a89350dd6"}
Jan 06 14:36:23 crc kubenswrapper[4869]: I0106 14:36:23.959456 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r" event={"ID":"c0c8d127-5a60-4d66-8c61-1b430b37a374","Type":"ContainerStarted","Data":"6f0a840dfbfa6761bbc364a611f5491dce1544a1818349a77eb483aff5cedfbc"}
Jan 06 14:36:23 crc kubenswrapper[4869]: I0106 14:36:23.979651 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r" podStartSLOduration=1.29006639 podStartE2EDuration="1.979626455s" podCreationTimestamp="2026-01-06 14:36:22 +0000 UTC" firstStartedPulling="2026-01-06 14:36:22.907499123 +0000 UTC m=+2201.447186787" lastFinishedPulling="2026-01-06 14:36:23.597059148 +0000 UTC m=+2202.136746852" observedRunningTime="2026-01-06 14:36:23.974616027 +0000 UTC m=+2202.514303701" watchObservedRunningTime="2026-01-06 14:36:23.979626455 +0000 UTC m=+2202.519314119"
Jan 06 14:36:30 crc kubenswrapper[4869]: I0106 14:36:30.014567 4869 generic.go:334] "Generic (PLEG): container finished" podID="c0c8d127-5a60-4d66-8c61-1b430b37a374" containerID="6f0a840dfbfa6761bbc364a611f5491dce1544a1818349a77eb483aff5cedfbc" exitCode=0
Jan 06 14:36:30 crc kubenswrapper[4869]: I0106 14:36:30.014752 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r" event={"ID":"c0c8d127-5a60-4d66-8c61-1b430b37a374","Type":"ContainerDied","Data":"6f0a840dfbfa6761bbc364a611f5491dce1544a1818349a77eb483aff5cedfbc"}
Jan 06 14:36:31 crc kubenswrapper[4869]: I0106 14:36:31.470324 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r"
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r" Jan 06 14:36:31 crc kubenswrapper[4869]: I0106 14:36:31.659888 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfm8n\" (UniqueName: \"kubernetes.io/projected/c0c8d127-5a60-4d66-8c61-1b430b37a374-kube-api-access-sfm8n\") pod \"c0c8d127-5a60-4d66-8c61-1b430b37a374\" (UID: \"c0c8d127-5a60-4d66-8c61-1b430b37a374\") " Jan 06 14:36:31 crc kubenswrapper[4869]: I0106 14:36:31.659983 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0c8d127-5a60-4d66-8c61-1b430b37a374-ssh-key-openstack-edpm-ipam\") pod \"c0c8d127-5a60-4d66-8c61-1b430b37a374\" (UID: \"c0c8d127-5a60-4d66-8c61-1b430b37a374\") " Jan 06 14:36:31 crc kubenswrapper[4869]: I0106 14:36:31.660018 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c0c8d127-5a60-4d66-8c61-1b430b37a374-ceph\") pod \"c0c8d127-5a60-4d66-8c61-1b430b37a374\" (UID: \"c0c8d127-5a60-4d66-8c61-1b430b37a374\") " Jan 06 14:36:31 crc kubenswrapper[4869]: I0106 14:36:31.660135 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0c8d127-5a60-4d66-8c61-1b430b37a374-inventory\") pod \"c0c8d127-5a60-4d66-8c61-1b430b37a374\" (UID: \"c0c8d127-5a60-4d66-8c61-1b430b37a374\") " Jan 06 14:36:31 crc kubenswrapper[4869]: I0106 14:36:31.665991 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0c8d127-5a60-4d66-8c61-1b430b37a374-kube-api-access-sfm8n" (OuterVolumeSpecName: "kube-api-access-sfm8n") pod "c0c8d127-5a60-4d66-8c61-1b430b37a374" (UID: "c0c8d127-5a60-4d66-8c61-1b430b37a374"). InnerVolumeSpecName "kube-api-access-sfm8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:36:31 crc kubenswrapper[4869]: I0106 14:36:31.673843 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c8d127-5a60-4d66-8c61-1b430b37a374-ceph" (OuterVolumeSpecName: "ceph") pod "c0c8d127-5a60-4d66-8c61-1b430b37a374" (UID: "c0c8d127-5a60-4d66-8c61-1b430b37a374"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:36:31 crc kubenswrapper[4869]: I0106 14:36:31.687459 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c8d127-5a60-4d66-8c61-1b430b37a374-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c0c8d127-5a60-4d66-8c61-1b430b37a374" (UID: "c0c8d127-5a60-4d66-8c61-1b430b37a374"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:36:31 crc kubenswrapper[4869]: I0106 14:36:31.720117 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c8d127-5a60-4d66-8c61-1b430b37a374-inventory" (OuterVolumeSpecName: "inventory") pod "c0c8d127-5a60-4d66-8c61-1b430b37a374" (UID: "c0c8d127-5a60-4d66-8c61-1b430b37a374"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:36:31 crc kubenswrapper[4869]: I0106 14:36:31.761731 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0c8d127-5a60-4d66-8c61-1b430b37a374-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:36:31 crc kubenswrapper[4869]: I0106 14:36:31.761761 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c0c8d127-5a60-4d66-8c61-1b430b37a374-ceph\") on node \"crc\" DevicePath \"\"" Jan 06 14:36:31 crc kubenswrapper[4869]: I0106 14:36:31.761770 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0c8d127-5a60-4d66-8c61-1b430b37a374-inventory\") on node \"crc\" DevicePath \"\"" Jan 06 14:36:31 crc kubenswrapper[4869]: I0106 14:36:31.761780 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfm8n\" (UniqueName: \"kubernetes.io/projected/c0c8d127-5a60-4d66-8c61-1b430b37a374-kube-api-access-sfm8n\") on node \"crc\" DevicePath \"\"" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.033979 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r" event={"ID":"c0c8d127-5a60-4d66-8c61-1b430b37a374","Type":"ContainerDied","Data":"86eaac21d141962b079a5f02670740f06f0ff11a26f6453dbd7ea51a89350dd6"} Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.034042 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86eaac21d141962b079a5f02670740f06f0ff11a26f6453dbd7ea51a89350dd6" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.034054 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.192614 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn"] Jan 06 14:36:32 crc kubenswrapper[4869]: E0106 14:36:32.193169 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c8d127-5a60-4d66-8c61-1b430b37a374" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.193195 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c8d127-5a60-4d66-8c61-1b430b37a374" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.193372 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0c8d127-5a60-4d66-8c61-1b430b37a374" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.194371 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.196963 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.197022 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.199067 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.199686 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.202715 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.215183 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn"] Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.384071 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wpsx\" (UniqueName: \"kubernetes.io/projected/3ecdff9b-23e8-4883-9128-37da64316185-kube-api-access-4wpsx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xdtdn\" (UID: \"3ecdff9b-23e8-4883-9128-37da64316185\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.384145 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ecdff9b-23e8-4883-9128-37da64316185-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xdtdn\" (UID: \"3ecdff9b-23e8-4883-9128-37da64316185\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.384887 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ecdff9b-23e8-4883-9128-37da64316185-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xdtdn\" (UID: \"3ecdff9b-23e8-4883-9128-37da64316185\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.384920 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3ecdff9b-23e8-4883-9128-37da64316185-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xdtdn\" (UID: \"3ecdff9b-23e8-4883-9128-37da64316185\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.487124 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ecdff9b-23e8-4883-9128-37da64316185-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xdtdn\" (UID: \"3ecdff9b-23e8-4883-9128-37da64316185\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.487191 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ceph\" (UniqueName: \"kubernetes.io/secret/3ecdff9b-23e8-4883-9128-37da64316185-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xdtdn\" (UID: \"3ecdff9b-23e8-4883-9128-37da64316185\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.487269 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wpsx\" (UniqueName: \"kubernetes.io/projected/3ecdff9b-23e8-4883-9128-37da64316185-kube-api-access-4wpsx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xdtdn\" (UID: \"3ecdff9b-23e8-4883-9128-37da64316185\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.487317 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ecdff9b-23e8-4883-9128-37da64316185-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xdtdn\" (UID: \"3ecdff9b-23e8-4883-9128-37da64316185\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.492364 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ecdff9b-23e8-4883-9128-37da64316185-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xdtdn\" (UID: \"3ecdff9b-23e8-4883-9128-37da64316185\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.492585 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3ecdff9b-23e8-4883-9128-37da64316185-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xdtdn\" (UID: \"3ecdff9b-23e8-4883-9128-37da64316185\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.492754 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ecdff9b-23e8-4883-9128-37da64316185-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xdtdn\" (UID: \"3ecdff9b-23e8-4883-9128-37da64316185\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.516319 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wpsx\" (UniqueName: \"kubernetes.io/projected/3ecdff9b-23e8-4883-9128-37da64316185-kube-api-access-4wpsx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-xdtdn\" (UID: \"3ecdff9b-23e8-4883-9128-37da64316185\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn" Jan 06 14:36:32 crc kubenswrapper[4869]: I0106 14:36:32.813459 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn" Jan 06 14:36:33 crc kubenswrapper[4869]: I0106 14:36:33.375776 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn"] Jan 06 14:36:33 crc kubenswrapper[4869]: I0106 14:36:33.622498 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:36:33 crc kubenswrapper[4869]: I0106 14:36:33.622595 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:36:34 crc kubenswrapper[4869]: I0106 14:36:34.050116 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn" event={"ID":"3ecdff9b-23e8-4883-9128-37da64316185","Type":"ContainerStarted","Data":"53b8e910df8cb4aa8d6f30a470d0a64786d09bd947d77305f89a1635139024e4"} Jan 06 14:36:35 crc kubenswrapper[4869]: I0106 14:36:35.063717 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn" event={"ID":"3ecdff9b-23e8-4883-9128-37da64316185","Type":"ContainerStarted","Data":"2d60cbe3cb4810c9aee6a3b0fcdbd16339953b95a34174ec4d823bec423b1498"} Jan 06 14:36:35 crc kubenswrapper[4869]: I0106 14:36:35.092478 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn" podStartSLOduration=2.608385117 podStartE2EDuration="3.092455151s" podCreationTimestamp="2026-01-06 14:36:32 +0000 UTC" firstStartedPulling="2026-01-06 14:36:33.390413772 +0000 UTC m=+2211.930101476" lastFinishedPulling="2026-01-06 14:36:33.874483806 +0000 UTC m=+2212.414171510" observedRunningTime="2026-01-06 14:36:35.090701109 +0000 UTC m=+2213.630388803" watchObservedRunningTime="2026-01-06 14:36:35.092455151 +0000 UTC m=+2213.632142825" Jan 06 14:36:55 crc kubenswrapper[4869]: I0106 14:36:55.090114 4869 scope.go:117] "RemoveContainer" containerID="6a9c97a86e992256dee20ef2f2954fec396d3a57fa394162aeb923f0c7e45653" Jan 06 14:36:59 crc kubenswrapper[4869]: I0106 14:36:59.998079 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lm9vc"] Jan 06 14:37:00 crc kubenswrapper[4869]: I0106 14:37:00.005655 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lm9vc" Jan 06 14:37:00 crc kubenswrapper[4869]: I0106 14:37:00.044562 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lm9vc"] Jan 06 14:37:00 crc kubenswrapper[4869]: I0106 14:37:00.196601 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e98ca56f-8a10-4aa9-b93a-ba363f6cbec9-utilities\") pod \"certified-operators-lm9vc\" (UID: \"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9\") " pod="openshift-marketplace/certified-operators-lm9vc" Jan 06 14:37:00 crc kubenswrapper[4869]: I0106 14:37:00.196733 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e98ca56f-8a10-4aa9-b93a-ba363f6cbec9-catalog-content\") pod \"certified-operators-lm9vc\" (UID: \"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9\") " pod="openshift-marketplace/certified-operators-lm9vc" Jan 06 14:37:00 crc kubenswrapper[4869]: I0106 14:37:00.196782 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks9ph\" (UniqueName: \"kubernetes.io/projected/e98ca56f-8a10-4aa9-b93a-ba363f6cbec9-kube-api-access-ks9ph\") pod \"certified-operators-lm9vc\" (UID: \"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9\") " pod="openshift-marketplace/certified-operators-lm9vc" Jan 06 14:37:00 crc kubenswrapper[4869]: I0106 14:37:00.299338 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks9ph\" (UniqueName: \"kubernetes.io/projected/e98ca56f-8a10-4aa9-b93a-ba363f6cbec9-kube-api-access-ks9ph\") pod \"certified-operators-lm9vc\" (UID: \"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9\") " pod="openshift-marketplace/certified-operators-lm9vc" Jan 06 14:37:00 crc kubenswrapper[4869]: I0106 14:37:00.299485 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e98ca56f-8a10-4aa9-b93a-ba363f6cbec9-utilities\") pod \"certified-operators-lm9vc\" (UID: \"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9\") " pod="openshift-marketplace/certified-operators-lm9vc" Jan 06 14:37:00 crc kubenswrapper[4869]: I0106 14:37:00.299603 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e98ca56f-8a10-4aa9-b93a-ba363f6cbec9-catalog-content\") pod \"certified-operators-lm9vc\" (UID: \"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9\") " pod="openshift-marketplace/certified-operators-lm9vc" Jan 06 14:37:00 crc kubenswrapper[4869]: I0106 14:37:00.300048 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e98ca56f-8a10-4aa9-b93a-ba363f6cbec9-utilities\") pod \"certified-operators-lm9vc\" (UID: \"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9\") " pod="openshift-marketplace/certified-operators-lm9vc" Jan 06 14:37:00 crc kubenswrapper[4869]: I0106 14:37:00.300131 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e98ca56f-8a10-4aa9-b93a-ba363f6cbec9-catalog-content\") pod \"certified-operators-lm9vc\" (UID: \"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9\") " pod="openshift-marketplace/certified-operators-lm9vc" Jan 06 14:37:00 crc kubenswrapper[4869]: I0106 14:37:00.323128 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ks9ph\" (UniqueName: \"kubernetes.io/projected/e98ca56f-8a10-4aa9-b93a-ba363f6cbec9-kube-api-access-ks9ph\") pod \"certified-operators-lm9vc\" (UID: \"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9\") " pod="openshift-marketplace/certified-operators-lm9vc" Jan 06 14:37:00 crc kubenswrapper[4869]: I0106 14:37:00.345053 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lm9vc" Jan 06 14:37:00 crc kubenswrapper[4869]: I0106 14:37:00.891858 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lm9vc"] Jan 06 14:37:01 crc kubenswrapper[4869]: I0106 14:37:01.358894 4869 generic.go:334] "Generic (PLEG): container finished" podID="e98ca56f-8a10-4aa9-b93a-ba363f6cbec9" containerID="73caeb09e24abe52efe163aeceb3272b4dbd2e0420fbf2b5d33fe3d6b3186b69" exitCode=0 Jan 06 14:37:01 crc kubenswrapper[4869]: I0106 14:37:01.358989 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lm9vc" event={"ID":"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9","Type":"ContainerDied","Data":"73caeb09e24abe52efe163aeceb3272b4dbd2e0420fbf2b5d33fe3d6b3186b69"} Jan 06 14:37:01 crc kubenswrapper[4869]: I0106 14:37:01.359490 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lm9vc" event={"ID":"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9","Type":"ContainerStarted","Data":"3811b114c14b6358f904b9551b2ad4eea92209fb245b51161152ffbd71f26a0d"} Jan 06 14:37:02 crc kubenswrapper[4869]: I0106 14:37:02.371049 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lm9vc" event={"ID":"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9","Type":"ContainerStarted","Data":"99122ba007bbcf86cdb17e979aee8f5cb15824dd373bc9734640ccc359680585"} Jan 06 14:37:03 crc kubenswrapper[4869]: I0106 14:37:03.384850 4869 generic.go:334] "Generic (PLEG): container finished" podID="e98ca56f-8a10-4aa9-b93a-ba363f6cbec9" containerID="99122ba007bbcf86cdb17e979aee8f5cb15824dd373bc9734640ccc359680585" exitCode=0 Jan 06 14:37:03 crc kubenswrapper[4869]: I0106 14:37:03.385183 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lm9vc" event={"ID":"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9","Type":"ContainerDied","Data":"99122ba007bbcf86cdb17e979aee8f5cb15824dd373bc9734640ccc359680585"} Jan 06 14:37:03 crc kubenswrapper[4869]: I0106 14:37:03.622272 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:37:03 crc kubenswrapper[4869]: I0106 14:37:03.622371 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:37:03 crc kubenswrapper[4869]: I0106 14:37:03.622437 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:37:03 crc kubenswrapper[4869]: I0106 14:37:03.623387 4869 kuberuntime_manager.go:1027] "Message for Container of 
pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af"} pod="openshift-machine-config-operator/machine-config-daemon-kt9df" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 06 14:37:03 crc kubenswrapper[4869]: I0106 14:37:03.623498 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" containerID="cri-o://9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" gracePeriod=600 Jan 06 14:37:03 crc kubenswrapper[4869]: E0106 14:37:03.746807 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:37:04 crc kubenswrapper[4869]: I0106 14:37:04.394264 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lm9vc" event={"ID":"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9","Type":"ContainerStarted","Data":"ebc77200527eaf9722ff9110488f3d6e38be6c64403a828c69ebd1f276da978e"} Jan 06 14:37:04 crc kubenswrapper[4869]: I0106 14:37:04.396754 4869 generic.go:334] "Generic (PLEG): container finished" podID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" exitCode=0 Jan 06 14:37:04 crc kubenswrapper[4869]: I0106 14:37:04.396801 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerDied","Data":"9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af"} Jan 06 14:37:04 crc kubenswrapper[4869]: I0106 14:37:04.396838 4869 scope.go:117] "RemoveContainer" containerID="6a60a4b71f0d885ff0258ff4ceb5607a593a8d1d043cc7d68f413ed6f7581816" Jan 06 14:37:04 crc kubenswrapper[4869]: I0106 14:37:04.397501 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:37:04 crc kubenswrapper[4869]: E0106 14:37:04.397790 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:37:04 crc kubenswrapper[4869]: I0106 14:37:04.430449 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lm9vc" podStartSLOduration=2.990389644 podStartE2EDuration="5.43042848s" podCreationTimestamp="2026-01-06 14:36:59 +0000 UTC" firstStartedPulling="2026-01-06 14:37:01.361898282 +0000 UTC m=+2239.901585946" lastFinishedPulling="2026-01-06 14:37:03.801937118 +0000 UTC m=+2242.341624782" observedRunningTime="2026-01-06 14:37:04.413503737 +0000 UTC m=+2242.953191411" watchObservedRunningTime="2026-01-06 
Jan 06 14:37:07 crc kubenswrapper[4869]: I0106 14:37:07.370275 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-68794"]
Jan 06 14:37:07 crc kubenswrapper[4869]: I0106 14:37:07.372627 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68794"
Jan 06 14:37:07 crc kubenswrapper[4869]: I0106 14:37:07.388973 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-68794"]
Jan 06 14:37:07 crc kubenswrapper[4869]: I0106 14:37:07.455859 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktvxt\" (UniqueName: \"kubernetes.io/projected/7d6ac3ca-f2da-4739-871f-d9c18bacb167-kube-api-access-ktvxt\") pod \"redhat-operators-68794\" (UID: \"7d6ac3ca-f2da-4739-871f-d9c18bacb167\") " pod="openshift-marketplace/redhat-operators-68794"
Jan 06 14:37:07 crc kubenswrapper[4869]: I0106 14:37:07.455941 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d6ac3ca-f2da-4739-871f-d9c18bacb167-catalog-content\") pod \"redhat-operators-68794\" (UID: \"7d6ac3ca-f2da-4739-871f-d9c18bacb167\") " pod="openshift-marketplace/redhat-operators-68794"
Jan 06 14:37:07 crc kubenswrapper[4869]: I0106 14:37:07.456053 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d6ac3ca-f2da-4739-871f-d9c18bacb167-utilities\") pod \"redhat-operators-68794\" (UID: \"7d6ac3ca-f2da-4739-871f-d9c18bacb167\") " pod="openshift-marketplace/redhat-operators-68794"
Jan 06 14:37:07 crc kubenswrapper[4869]: I0106 14:37:07.557847 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktvxt\" (UniqueName: \"kubernetes.io/projected/7d6ac3ca-f2da-4739-871f-d9c18bacb167-kube-api-access-ktvxt\") pod \"redhat-operators-68794\" (UID: \"7d6ac3ca-f2da-4739-871f-d9c18bacb167\") " pod="openshift-marketplace/redhat-operators-68794"
Jan 06 14:37:07 crc kubenswrapper[4869]: I0106 14:37:07.557954 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d6ac3ca-f2da-4739-871f-d9c18bacb167-catalog-content\") pod \"redhat-operators-68794\" (UID: \"7d6ac3ca-f2da-4739-871f-d9c18bacb167\") " pod="openshift-marketplace/redhat-operators-68794"
Jan 06 14:37:07 crc kubenswrapper[4869]: I0106 14:37:07.558015 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d6ac3ca-f2da-4739-871f-d9c18bacb167-utilities\") pod \"redhat-operators-68794\" (UID: \"7d6ac3ca-f2da-4739-871f-d9c18bacb167\") " pod="openshift-marketplace/redhat-operators-68794"
Jan 06 14:37:07 crc kubenswrapper[4869]: I0106 14:37:07.558586 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d6ac3ca-f2da-4739-871f-d9c18bacb167-utilities\") pod \"redhat-operators-68794\" (UID: \"7d6ac3ca-f2da-4739-871f-d9c18bacb167\") " pod="openshift-marketplace/redhat-operators-68794"
Jan 06 14:37:07 crc kubenswrapper[4869]: I0106 14:37:07.558597 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d6ac3ca-f2da-4739-871f-d9c18bacb167-catalog-content\") pod \"redhat-operators-68794\" (UID: \"7d6ac3ca-f2da-4739-871f-d9c18bacb167\") " pod="openshift-marketplace/redhat-operators-68794"
Jan 06 14:37:07 crc kubenswrapper[4869]: I0106 14:37:07.583719 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktvxt\" (UniqueName: \"kubernetes.io/projected/7d6ac3ca-f2da-4739-871f-d9c18bacb167-kube-api-access-ktvxt\") pod \"redhat-operators-68794\" (UID: \"7d6ac3ca-f2da-4739-871f-d9c18bacb167\") " pod="openshift-marketplace/redhat-operators-68794"
Jan 06 14:37:07 crc kubenswrapper[4869]: I0106 14:37:07.692121 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68794"
Jan 06 14:37:08 crc kubenswrapper[4869]: I0106 14:37:08.237752 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-68794"]
Jan 06 14:37:08 crc kubenswrapper[4869]: I0106 14:37:08.435909 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68794" event={"ID":"7d6ac3ca-f2da-4739-871f-d9c18bacb167","Type":"ContainerStarted","Data":"87c752a332001d6363c1c54e59e92c8deadd115cc7cfc3c4033a20d2ca5f7981"}
Jan 06 14:37:09 crc kubenswrapper[4869]: I0106 14:37:09.447097 4869 generic.go:334] "Generic (PLEG): container finished" podID="7d6ac3ca-f2da-4739-871f-d9c18bacb167" containerID="61fcd884bab5d33e25893146dfbf694ffa7ef3d7c64bc088d483d0e550723558" exitCode=0
Jan 06 14:37:09 crc kubenswrapper[4869]: I0106 14:37:09.447276 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68794" event={"ID":"7d6ac3ca-f2da-4739-871f-d9c18bacb167","Type":"ContainerDied","Data":"61fcd884bab5d33e25893146dfbf694ffa7ef3d7c64bc088d483d0e550723558"}
Jan 06 14:37:10 crc kubenswrapper[4869]: I0106 14:37:10.345719 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lm9vc"
Jan 06 14:37:10 crc kubenswrapper[4869]: I0106 14:37:10.346174 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lm9vc"
Jan 06 14:37:10 crc kubenswrapper[4869]: I0106 14:37:10.409843 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lm9vc"
Jan 06 14:37:10 crc kubenswrapper[4869]: I0106 14:37:10.457339 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68794" event={"ID":"7d6ac3ca-f2da-4739-871f-d9c18bacb167","Type":"ContainerStarted","Data":"a9e9814d73fe752e4dbb33d78959c8f2022ba2b30c9f6033c4a16da2ca7496ca"}
Jan 06 14:37:10 crc kubenswrapper[4869]: I0106 14:37:10.505331 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lm9vc"
Jan 06 14:37:12 crc kubenswrapper[4869]: I0106 14:37:12.566310 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lm9vc"]
Jan 06 14:37:12 crc kubenswrapper[4869]: I0106 14:37:12.566905 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lm9vc" podUID="e98ca56f-8a10-4aa9-b93a-ba363f6cbec9" containerName="registry-server" containerID="cri-o://ebc77200527eaf9722ff9110488f3d6e38be6c64403a828c69ebd1f276da978e" gracePeriod=2
4869 generic.go:334] "Generic (PLEG): container finished" podID="7d6ac3ca-f2da-4739-871f-d9c18bacb167" containerID="a9e9814d73fe752e4dbb33d78959c8f2022ba2b30c9f6033c4a16da2ca7496ca" exitCode=0 Jan 06 14:37:13 crc kubenswrapper[4869]: I0106 14:37:13.482643 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68794" event={"ID":"7d6ac3ca-f2da-4739-871f-d9c18bacb167","Type":"ContainerDied","Data":"a9e9814d73fe752e4dbb33d78959c8f2022ba2b30c9f6033c4a16da2ca7496ca"} Jan 06 14:37:14 crc kubenswrapper[4869]: I0106 14:37:14.493941 4869 generic.go:334] "Generic (PLEG): container finished" podID="e98ca56f-8a10-4aa9-b93a-ba363f6cbec9" containerID="ebc77200527eaf9722ff9110488f3d6e38be6c64403a828c69ebd1f276da978e" exitCode=0 Jan 06 14:37:14 crc kubenswrapper[4869]: I0106 14:37:14.494019 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lm9vc" event={"ID":"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9","Type":"ContainerDied","Data":"ebc77200527eaf9722ff9110488f3d6e38be6c64403a828c69ebd1f276da978e"} Jan 06 14:37:14 crc kubenswrapper[4869]: I0106 14:37:14.497005 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68794" event={"ID":"7d6ac3ca-f2da-4739-871f-d9c18bacb167","Type":"ContainerStarted","Data":"8cbb8a50c2bf73c88b4cba01f12eea21aac8c39448d98293a70f56075d2a94e9"} Jan 06 14:37:14 crc kubenswrapper[4869]: I0106 14:37:14.527979 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-68794" podStartSLOduration=2.837752942 podStartE2EDuration="7.527952855s" podCreationTimestamp="2026-01-06 14:37:07 +0000 UTC" firstStartedPulling="2026-01-06 14:37:09.449416908 +0000 UTC m=+2247.989104572" lastFinishedPulling="2026-01-06 14:37:14.139616821 +0000 UTC m=+2252.679304485" observedRunningTime="2026-01-06 14:37:14.51973018 +0000 UTC m=+2253.059417834" watchObservedRunningTime="2026-01-06 14:37:14.527952855 +0000 UTC m=+2253.067640519" Jan 06 14:37:14 crc kubenswrapper[4869]: I0106 14:37:14.932573 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lm9vc" Jan 06 14:37:15 crc kubenswrapper[4869]: I0106 14:37:15.125390 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e98ca56f-8a10-4aa9-b93a-ba363f6cbec9-catalog-content\") pod \"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9\" (UID: \"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9\") " Jan 06 14:37:15 crc kubenswrapper[4869]: I0106 14:37:15.125774 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e98ca56f-8a10-4aa9-b93a-ba363f6cbec9-utilities\") pod \"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9\" (UID: \"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9\") " Jan 06 14:37:15 crc kubenswrapper[4869]: I0106 14:37:15.125851 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks9ph\" (UniqueName: \"kubernetes.io/projected/e98ca56f-8a10-4aa9-b93a-ba363f6cbec9-kube-api-access-ks9ph\") pod \"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9\" (UID: \"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9\") " Jan 06 14:37:15 crc kubenswrapper[4869]: I0106 14:37:15.127882 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e98ca56f-8a10-4aa9-b93a-ba363f6cbec9-utilities" (OuterVolumeSpecName: "utilities") pod "e98ca56f-8a10-4aa9-b93a-ba363f6cbec9" (UID: "e98ca56f-8a10-4aa9-b93a-ba363f6cbec9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:37:15 crc kubenswrapper[4869]: I0106 14:37:15.134057 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e98ca56f-8a10-4aa9-b93a-ba363f6cbec9-kube-api-access-ks9ph" (OuterVolumeSpecName: "kube-api-access-ks9ph") pod "e98ca56f-8a10-4aa9-b93a-ba363f6cbec9" (UID: "e98ca56f-8a10-4aa9-b93a-ba363f6cbec9"). InnerVolumeSpecName "kube-api-access-ks9ph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:37:15 crc kubenswrapper[4869]: I0106 14:37:15.185108 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e98ca56f-8a10-4aa9-b93a-ba363f6cbec9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e98ca56f-8a10-4aa9-b93a-ba363f6cbec9" (UID: "e98ca56f-8a10-4aa9-b93a-ba363f6cbec9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:37:15 crc kubenswrapper[4869]: I0106 14:37:15.228175 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e98ca56f-8a10-4aa9-b93a-ba363f6cbec9-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:37:15 crc kubenswrapper[4869]: I0106 14:37:15.228229 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks9ph\" (UniqueName: \"kubernetes.io/projected/e98ca56f-8a10-4aa9-b93a-ba363f6cbec9-kube-api-access-ks9ph\") on node \"crc\" DevicePath \"\"" Jan 06 14:37:15 crc kubenswrapper[4869]: I0106 14:37:15.228249 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e98ca56f-8a10-4aa9-b93a-ba363f6cbec9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:37:15 crc kubenswrapper[4869]: I0106 14:37:15.507482 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lm9vc" Jan 06 14:37:15 crc kubenswrapper[4869]: I0106 14:37:15.507482 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lm9vc" event={"ID":"e98ca56f-8a10-4aa9-b93a-ba363f6cbec9","Type":"ContainerDied","Data":"3811b114c14b6358f904b9551b2ad4eea92209fb245b51161152ffbd71f26a0d"} Jan 06 14:37:15 crc kubenswrapper[4869]: I0106 14:37:15.507861 4869 scope.go:117] "RemoveContainer" containerID="ebc77200527eaf9722ff9110488f3d6e38be6c64403a828c69ebd1f276da978e" Jan 06 14:37:15 crc kubenswrapper[4869]: I0106 14:37:15.511301 4869 generic.go:334] "Generic (PLEG): container finished" podID="3ecdff9b-23e8-4883-9128-37da64316185" containerID="2d60cbe3cb4810c9aee6a3b0fcdbd16339953b95a34174ec4d823bec423b1498" exitCode=0 Jan 06 14:37:15 crc kubenswrapper[4869]: I0106 14:37:15.511355 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn" event={"ID":"3ecdff9b-23e8-4883-9128-37da64316185","Type":"ContainerDied","Data":"2d60cbe3cb4810c9aee6a3b0fcdbd16339953b95a34174ec4d823bec423b1498"} Jan 06 14:37:15 crc kubenswrapper[4869]: I0106 14:37:15.530342 4869 scope.go:117] "RemoveContainer" containerID="99122ba007bbcf86cdb17e979aee8f5cb15824dd373bc9734640ccc359680585" Jan 06 14:37:15 crc kubenswrapper[4869]: I0106 14:37:15.564780 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lm9vc"] Jan 06 14:37:15 crc kubenswrapper[4869]: I0106 14:37:15.565742 4869 scope.go:117] "RemoveContainer" containerID="73caeb09e24abe52efe163aeceb3272b4dbd2e0420fbf2b5d33fe3d6b3186b69" Jan 06 14:37:15 crc kubenswrapper[4869]: I0106 14:37:15.578578 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lm9vc"] Jan 06 14:37:15 crc kubenswrapper[4869]: I0106 14:37:15.718447 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e98ca56f-8a10-4aa9-b93a-ba363f6cbec9" path="/var/lib/kubelet/pods/e98ca56f-8a10-4aa9-b93a-ba363f6cbec9/volumes" Jan 06 14:37:16 crc kubenswrapper[4869]: I0106 14:37:16.705161 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:37:16 crc kubenswrapper[4869]: E0106 14:37:16.705592 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:37:16 crc kubenswrapper[4869]: I0106 14:37:16.892548 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.059936 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3ecdff9b-23e8-4883-9128-37da64316185-ceph\") pod \"3ecdff9b-23e8-4883-9128-37da64316185\" (UID: \"3ecdff9b-23e8-4883-9128-37da64316185\") " Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.060584 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wpsx\" (UniqueName: \"kubernetes.io/projected/3ecdff9b-23e8-4883-9128-37da64316185-kube-api-access-4wpsx\") pod \"3ecdff9b-23e8-4883-9128-37da64316185\" (UID: \"3ecdff9b-23e8-4883-9128-37da64316185\") " Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.061129 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ecdff9b-23e8-4883-9128-37da64316185-ssh-key-openstack-edpm-ipam\") pod \"3ecdff9b-23e8-4883-9128-37da64316185\" (UID: \"3ecdff9b-23e8-4883-9128-37da64316185\") " Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.061281 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ecdff9b-23e8-4883-9128-37da64316185-inventory\") pod \"3ecdff9b-23e8-4883-9128-37da64316185\" (UID: \"3ecdff9b-23e8-4883-9128-37da64316185\") " Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.067789 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecdff9b-23e8-4883-9128-37da64316185-ceph" (OuterVolumeSpecName: "ceph") pod "3ecdff9b-23e8-4883-9128-37da64316185" (UID: "3ecdff9b-23e8-4883-9128-37da64316185"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.067858 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ecdff9b-23e8-4883-9128-37da64316185-kube-api-access-4wpsx" (OuterVolumeSpecName: "kube-api-access-4wpsx") pod "3ecdff9b-23e8-4883-9128-37da64316185" (UID: "3ecdff9b-23e8-4883-9128-37da64316185"). InnerVolumeSpecName "kube-api-access-4wpsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.090868 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecdff9b-23e8-4883-9128-37da64316185-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3ecdff9b-23e8-4883-9128-37da64316185" (UID: "3ecdff9b-23e8-4883-9128-37da64316185"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.094015 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ecdff9b-23e8-4883-9128-37da64316185-inventory" (OuterVolumeSpecName: "inventory") pod "3ecdff9b-23e8-4883-9128-37da64316185" (UID: "3ecdff9b-23e8-4883-9128-37da64316185"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.164181 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ecdff9b-23e8-4883-9128-37da64316185-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.164228 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ecdff9b-23e8-4883-9128-37da64316185-inventory\") on node \"crc\" DevicePath \"\"" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.164241 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3ecdff9b-23e8-4883-9128-37da64316185-ceph\") on node \"crc\" DevicePath \"\"" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.164257 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wpsx\" (UniqueName: \"kubernetes.io/projected/3ecdff9b-23e8-4883-9128-37da64316185-kube-api-access-4wpsx\") on node \"crc\" DevicePath \"\"" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.528167 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn" event={"ID":"3ecdff9b-23e8-4883-9128-37da64316185","Type":"ContainerDied","Data":"53b8e910df8cb4aa8d6f30a470d0a64786d09bd947d77305f89a1635139024e4"} Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.528223 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53b8e910df8cb4aa8d6f30a470d0a64786d09bd947d77305f89a1635139024e4" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.528272 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-xdtdn" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.626491 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm"] Jan 06 14:37:17 crc kubenswrapper[4869]: E0106 14:37:17.626939 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ecdff9b-23e8-4883-9128-37da64316185" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.626962 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ecdff9b-23e8-4883-9128-37da64316185" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 06 14:37:17 crc kubenswrapper[4869]: E0106 14:37:17.626985 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e98ca56f-8a10-4aa9-b93a-ba363f6cbec9" containerName="registry-server" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.626994 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e98ca56f-8a10-4aa9-b93a-ba363f6cbec9" containerName="registry-server" Jan 06 14:37:17 crc kubenswrapper[4869]: E0106 14:37:17.627028 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e98ca56f-8a10-4aa9-b93a-ba363f6cbec9" containerName="extract-content" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.627037 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e98ca56f-8a10-4aa9-b93a-ba363f6cbec9" containerName="extract-content" Jan 06 14:37:17 crc kubenswrapper[4869]: E0106 14:37:17.627060 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e98ca56f-8a10-4aa9-b93a-ba363f6cbec9" containerName="extract-utilities" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.627069 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="e98ca56f-8a10-4aa9-b93a-ba363f6cbec9" containerName="extract-utilities" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.627289 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="e98ca56f-8a10-4aa9-b93a-ba363f6cbec9" containerName="registry-server" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.627311 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ecdff9b-23e8-4883-9128-37da64316185" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.627973 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.631221 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.631439 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.631786 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.631815 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.631955 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.639818 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm"] Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.693404 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-68794" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.693465 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-68794" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.774746 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtklt\" (UniqueName: \"kubernetes.io/projected/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-kube-api-access-jtklt\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm\" (UID: \"3c8a55f5-3919-44f7-b3b4-54397f2e3b11\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.775004 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm\" (UID: \"3c8a55f5-3919-44f7-b3b4-54397f2e3b11\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.775166 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm\" (UID: \"3c8a55f5-3919-44f7-b3b4-54397f2e3b11\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.775446 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm\" (UID: \"3c8a55f5-3919-44f7-b3b4-54397f2e3b11\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.876844 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtklt\" 
(UniqueName: \"kubernetes.io/projected/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-kube-api-access-jtklt\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm\" (UID: \"3c8a55f5-3919-44f7-b3b4-54397f2e3b11\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.876922 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm\" (UID: \"3c8a55f5-3919-44f7-b3b4-54397f2e3b11\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.877017 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm\" (UID: \"3c8a55f5-3919-44f7-b3b4-54397f2e3b11\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.877136 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm\" (UID: \"3c8a55f5-3919-44f7-b3b4-54397f2e3b11\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.881450 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm\" (UID: \"3c8a55f5-3919-44f7-b3b4-54397f2e3b11\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.881719 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm\" (UID: \"3c8a55f5-3919-44f7-b3b4-54397f2e3b11\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.881971 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm\" (UID: \"3c8a55f5-3919-44f7-b3b4-54397f2e3b11\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.895170 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtklt\" (UniqueName: \"kubernetes.io/projected/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-kube-api-access-jtklt\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm\" (UID: \"3c8a55f5-3919-44f7-b3b4-54397f2e3b11\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm" Jan 06 14:37:17 crc kubenswrapper[4869]: I0106 14:37:17.949584 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm" Jan 06 14:37:18 crc kubenswrapper[4869]: I0106 14:37:18.555750 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm"] Jan 06 14:37:18 crc kubenswrapper[4869]: I0106 14:37:18.741327 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-68794" podUID="7d6ac3ca-f2da-4739-871f-d9c18bacb167" containerName="registry-server" probeResult="failure" output=< Jan 06 14:37:18 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Jan 06 14:37:18 crc kubenswrapper[4869]: > Jan 06 14:37:19 crc kubenswrapper[4869]: I0106 14:37:19.549923 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm" event={"ID":"3c8a55f5-3919-44f7-b3b4-54397f2e3b11","Type":"ContainerStarted","Data":"e9de728909c76db45e56ef4bd00d02d4a9f456d55aff788afa807b0448d9e6a6"} Jan 06 14:37:19 crc kubenswrapper[4869]: I0106 14:37:19.549984 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm" event={"ID":"3c8a55f5-3919-44f7-b3b4-54397f2e3b11","Type":"ContainerStarted","Data":"a4500751968e4d5e6ec3feb82eba4a91ede6f8c4148e5a1bce264ed581edd8c9"} Jan 06 14:37:19 crc kubenswrapper[4869]: I0106 14:37:19.577571 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm" podStartSLOduration=2.142361993 podStartE2EDuration="2.577516442s" podCreationTimestamp="2026-01-06 14:37:17 +0000 UTC" firstStartedPulling="2026-01-06 14:37:18.580693943 +0000 UTC m=+2257.120381607" lastFinishedPulling="2026-01-06 14:37:19.015848382 +0000 UTC m=+2257.555536056" observedRunningTime="2026-01-06 14:37:19.571323235 +0000 UTC m=+2258.111010929" watchObservedRunningTime="2026-01-06 14:37:19.577516442 +0000 UTC m=+2258.117204136" Jan 06 14:37:23 crc kubenswrapper[4869]: I0106 14:37:23.591147 4869 generic.go:334] "Generic (PLEG): container finished" podID="3c8a55f5-3919-44f7-b3b4-54397f2e3b11" containerID="e9de728909c76db45e56ef4bd00d02d4a9f456d55aff788afa807b0448d9e6a6" exitCode=0 Jan 06 14:37:23 crc kubenswrapper[4869]: I0106 14:37:23.591250 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm" event={"ID":"3c8a55f5-3919-44f7-b3b4-54397f2e3b11","Type":"ContainerDied","Data":"e9de728909c76db45e56ef4bd00d02d4a9f456d55aff788afa807b0448d9e6a6"} Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.027724 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.035122 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtklt\" (UniqueName: \"kubernetes.io/projected/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-kube-api-access-jtklt\") pod \"3c8a55f5-3919-44f7-b3b4-54397f2e3b11\" (UID: \"3c8a55f5-3919-44f7-b3b4-54397f2e3b11\") " Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.040493 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-kube-api-access-jtklt" (OuterVolumeSpecName: "kube-api-access-jtklt") pod "3c8a55f5-3919-44f7-b3b4-54397f2e3b11" (UID: "3c8a55f5-3919-44f7-b3b4-54397f2e3b11"). InnerVolumeSpecName "kube-api-access-jtklt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.136626 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-inventory\") pod \"3c8a55f5-3919-44f7-b3b4-54397f2e3b11\" (UID: \"3c8a55f5-3919-44f7-b3b4-54397f2e3b11\") " Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.137216 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-ssh-key-openstack-edpm-ipam\") pod \"3c8a55f5-3919-44f7-b3b4-54397f2e3b11\" (UID: \"3c8a55f5-3919-44f7-b3b4-54397f2e3b11\") " Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.137298 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-ceph\") pod \"3c8a55f5-3919-44f7-b3b4-54397f2e3b11\" (UID: \"3c8a55f5-3919-44f7-b3b4-54397f2e3b11\") " Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.138369 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtklt\" (UniqueName: \"kubernetes.io/projected/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-kube-api-access-jtklt\") on node \"crc\" DevicePath \"\"" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.140454 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-ceph" (OuterVolumeSpecName: "ceph") pod "3c8a55f5-3919-44f7-b3b4-54397f2e3b11" (UID: "3c8a55f5-3919-44f7-b3b4-54397f2e3b11"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.163758 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3c8a55f5-3919-44f7-b3b4-54397f2e3b11" (UID: "3c8a55f5-3919-44f7-b3b4-54397f2e3b11"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.168634 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-inventory" (OuterVolumeSpecName: "inventory") pod "3c8a55f5-3919-44f7-b3b4-54397f2e3b11" (UID: "3c8a55f5-3919-44f7-b3b4-54397f2e3b11"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.239052 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-inventory\") on node \"crc\" DevicePath \"\"" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.239087 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.239099 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3c8a55f5-3919-44f7-b3b4-54397f2e3b11-ceph\") on node \"crc\" DevicePath \"\"" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.612136 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm" event={"ID":"3c8a55f5-3919-44f7-b3b4-54397f2e3b11","Type":"ContainerDied","Data":"a4500751968e4d5e6ec3feb82eba4a91ede6f8c4148e5a1bce264ed581edd8c9"} Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.612197 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4500751968e4d5e6ec3feb82eba4a91ede6f8c4148e5a1bce264ed581edd8c9" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.612197 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.697239 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b"] Jan 06 14:37:25 crc kubenswrapper[4869]: E0106 14:37:25.697697 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8a55f5-3919-44f7-b3b4-54397f2e3b11" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.697715 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8a55f5-3919-44f7-b3b4-54397f2e3b11" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.697922 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c8a55f5-3919-44f7-b3b4-54397f2e3b11" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.700682 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.706627 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.708165 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.708305 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.711082 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.711222 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.718602 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b"] Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.858208 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d87f359-40bb-40c9-b5f4-9b390767b167-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b\" (UID: \"1d87f359-40bb-40c9-b5f4-9b390767b167\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.858955 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tpp5\" (UniqueName: \"kubernetes.io/projected/1d87f359-40bb-40c9-b5f4-9b390767b167-kube-api-access-5tpp5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b\" (UID: \"1d87f359-40bb-40c9-b5f4-9b390767b167\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.859030 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1d87f359-40bb-40c9-b5f4-9b390767b167-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b\" (UID: \"1d87f359-40bb-40c9-b5f4-9b390767b167\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.859071 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d87f359-40bb-40c9-b5f4-9b390767b167-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b\" (UID: \"1d87f359-40bb-40c9-b5f4-9b390767b167\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.961824 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d87f359-40bb-40c9-b5f4-9b390767b167-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b\" (UID: \"1d87f359-40bb-40c9-b5f4-9b390767b167\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.961924 4869 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5tpp5\" (UniqueName: \"kubernetes.io/projected/1d87f359-40bb-40c9-b5f4-9b390767b167-kube-api-access-5tpp5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b\" (UID: \"1d87f359-40bb-40c9-b5f4-9b390767b167\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.961957 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1d87f359-40bb-40c9-b5f4-9b390767b167-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b\" (UID: \"1d87f359-40bb-40c9-b5f4-9b390767b167\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.961994 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d87f359-40bb-40c9-b5f4-9b390767b167-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b\" (UID: \"1d87f359-40bb-40c9-b5f4-9b390767b167\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.967631 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d87f359-40bb-40c9-b5f4-9b390767b167-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b\" (UID: \"1d87f359-40bb-40c9-b5f4-9b390767b167\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.969328 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1d87f359-40bb-40c9-b5f4-9b390767b167-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b\" (UID: \"1d87f359-40bb-40c9-b5f4-9b390767b167\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.973062 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d87f359-40bb-40c9-b5f4-9b390767b167-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b\" (UID: \"1d87f359-40bb-40c9-b5f4-9b390767b167\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b" Jan 06 14:37:25 crc kubenswrapper[4869]: I0106 14:37:25.980737 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tpp5\" (UniqueName: \"kubernetes.io/projected/1d87f359-40bb-40c9-b5f4-9b390767b167-kube-api-access-5tpp5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b\" (UID: \"1d87f359-40bb-40c9-b5f4-9b390767b167\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b" Jan 06 14:37:26 crc kubenswrapper[4869]: I0106 14:37:26.020972 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b" Jan 06 14:37:26 crc kubenswrapper[4869]: I0106 14:37:26.519742 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b"] Jan 06 14:37:26 crc kubenswrapper[4869]: I0106 14:37:26.622569 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b" event={"ID":"1d87f359-40bb-40c9-b5f4-9b390767b167","Type":"ContainerStarted","Data":"62dffeee4731f16457c87394c41b9c959d04b778239dc2ef0b75c5ef9af2b039"} Jan 06 14:37:27 crc kubenswrapper[4869]: I0106 14:37:27.632097 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b" event={"ID":"1d87f359-40bb-40c9-b5f4-9b390767b167","Type":"ContainerStarted","Data":"a4e12b6b70f96c9c4c2488e65fd7fac60a478801c9b7e2a48d26d1daeb3d5e55"} Jan 06 14:37:27 crc kubenswrapper[4869]: I0106 14:37:27.653762 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b" podStartSLOduration=2.241504927 podStartE2EDuration="2.65374384s" podCreationTimestamp="2026-01-06 14:37:25 +0000 UTC" firstStartedPulling="2026-01-06 14:37:26.534502417 +0000 UTC m=+2265.074190121" lastFinishedPulling="2026-01-06 14:37:26.94674136 +0000 UTC m=+2265.486429034" observedRunningTime="2026-01-06 14:37:27.648798813 +0000 UTC m=+2266.188486487" watchObservedRunningTime="2026-01-06 14:37:27.65374384 +0000 UTC m=+2266.193431504" Jan 06 14:37:27 crc kubenswrapper[4869]: I0106 14:37:27.704583 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:37:27 crc kubenswrapper[4869]: E0106 14:37:27.704900 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:37:27 crc kubenswrapper[4869]: I0106 14:37:27.738113 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-68794" Jan 06 14:37:27 crc kubenswrapper[4869]: I0106 14:37:27.806095 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-68794" Jan 06 14:37:27 crc kubenswrapper[4869]: I0106 14:37:27.974840 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-68794"] Jan 06 14:37:29 crc kubenswrapper[4869]: I0106 14:37:29.647804 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-68794" podUID="7d6ac3ca-f2da-4739-871f-d9c18bacb167" containerName="registry-server" containerID="cri-o://8cbb8a50c2bf73c88b4cba01f12eea21aac8c39448d98293a70f56075d2a94e9" gracePeriod=2 Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.113776 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-68794" Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.142424 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d6ac3ca-f2da-4739-871f-d9c18bacb167-catalog-content\") pod \"7d6ac3ca-f2da-4739-871f-d9c18bacb167\" (UID: \"7d6ac3ca-f2da-4739-871f-d9c18bacb167\") " Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.142732 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktvxt\" (UniqueName: \"kubernetes.io/projected/7d6ac3ca-f2da-4739-871f-d9c18bacb167-kube-api-access-ktvxt\") pod \"7d6ac3ca-f2da-4739-871f-d9c18bacb167\" (UID: \"7d6ac3ca-f2da-4739-871f-d9c18bacb167\") " Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.142805 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d6ac3ca-f2da-4739-871f-d9c18bacb167-utilities\") pod \"7d6ac3ca-f2da-4739-871f-d9c18bacb167\" (UID: \"7d6ac3ca-f2da-4739-871f-d9c18bacb167\") " Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.145259 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d6ac3ca-f2da-4739-871f-d9c18bacb167-utilities" (OuterVolumeSpecName: "utilities") pod "7d6ac3ca-f2da-4739-871f-d9c18bacb167" (UID: "7d6ac3ca-f2da-4739-871f-d9c18bacb167"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.150580 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d6ac3ca-f2da-4739-871f-d9c18bacb167-kube-api-access-ktvxt" (OuterVolumeSpecName: "kube-api-access-ktvxt") pod "7d6ac3ca-f2da-4739-871f-d9c18bacb167" (UID: "7d6ac3ca-f2da-4739-871f-d9c18bacb167"). InnerVolumeSpecName "kube-api-access-ktvxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.245062 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktvxt\" (UniqueName: \"kubernetes.io/projected/7d6ac3ca-f2da-4739-871f-d9c18bacb167-kube-api-access-ktvxt\") on node \"crc\" DevicePath \"\"" Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.245104 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d6ac3ca-f2da-4739-871f-d9c18bacb167-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.278197 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d6ac3ca-f2da-4739-871f-d9c18bacb167-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d6ac3ca-f2da-4739-871f-d9c18bacb167" (UID: "7d6ac3ca-f2da-4739-871f-d9c18bacb167"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.345885 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d6ac3ca-f2da-4739-871f-d9c18bacb167-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.659177 4869 generic.go:334] "Generic (PLEG): container finished" podID="7d6ac3ca-f2da-4739-871f-d9c18bacb167" containerID="8cbb8a50c2bf73c88b4cba01f12eea21aac8c39448d98293a70f56075d2a94e9" exitCode=0 Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.659231 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68794" event={"ID":"7d6ac3ca-f2da-4739-871f-d9c18bacb167","Type":"ContainerDied","Data":"8cbb8a50c2bf73c88b4cba01f12eea21aac8c39448d98293a70f56075d2a94e9"} Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.659266 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-68794" event={"ID":"7d6ac3ca-f2da-4739-871f-d9c18bacb167","Type":"ContainerDied","Data":"87c752a332001d6363c1c54e59e92c8deadd115cc7cfc3c4033a20d2ca5f7981"} Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.659293 4869 scope.go:117] "RemoveContainer" containerID="8cbb8a50c2bf73c88b4cba01f12eea21aac8c39448d98293a70f56075d2a94e9" Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.660815 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-68794" Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.690044 4869 scope.go:117] "RemoveContainer" containerID="a9e9814d73fe752e4dbb33d78959c8f2022ba2b30c9f6033c4a16da2ca7496ca" Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.712701 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-68794"] Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.724638 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-68794"] Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.740766 4869 scope.go:117] "RemoveContainer" containerID="61fcd884bab5d33e25893146dfbf694ffa7ef3d7c64bc088d483d0e550723558" Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.782876 4869 scope.go:117] "RemoveContainer" containerID="8cbb8a50c2bf73c88b4cba01f12eea21aac8c39448d98293a70f56075d2a94e9" Jan 06 14:37:30 crc kubenswrapper[4869]: E0106 14:37:30.783394 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cbb8a50c2bf73c88b4cba01f12eea21aac8c39448d98293a70f56075d2a94e9\": container with ID starting with 8cbb8a50c2bf73c88b4cba01f12eea21aac8c39448d98293a70f56075d2a94e9 not found: ID does not exist" containerID="8cbb8a50c2bf73c88b4cba01f12eea21aac8c39448d98293a70f56075d2a94e9" Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.783458 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cbb8a50c2bf73c88b4cba01f12eea21aac8c39448d98293a70f56075d2a94e9"} err="failed to get container status \"8cbb8a50c2bf73c88b4cba01f12eea21aac8c39448d98293a70f56075d2a94e9\": rpc error: code = NotFound desc = could not find container \"8cbb8a50c2bf73c88b4cba01f12eea21aac8c39448d98293a70f56075d2a94e9\": container with ID starting with 8cbb8a50c2bf73c88b4cba01f12eea21aac8c39448d98293a70f56075d2a94e9 not found: ID does not exist" Jan 06 14:37:30 crc 
kubenswrapper[4869]: I0106 14:37:30.783560 4869 scope.go:117] "RemoveContainer" containerID="a9e9814d73fe752e4dbb33d78959c8f2022ba2b30c9f6033c4a16da2ca7496ca"
Jan 06 14:37:30 crc kubenswrapper[4869]: E0106 14:37:30.784015 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9e9814d73fe752e4dbb33d78959c8f2022ba2b30c9f6033c4a16da2ca7496ca\": container with ID starting with a9e9814d73fe752e4dbb33d78959c8f2022ba2b30c9f6033c4a16da2ca7496ca not found: ID does not exist" containerID="a9e9814d73fe752e4dbb33d78959c8f2022ba2b30c9f6033c4a16da2ca7496ca"
Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.784045 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9e9814d73fe752e4dbb33d78959c8f2022ba2b30c9f6033c4a16da2ca7496ca"} err="failed to get container status \"a9e9814d73fe752e4dbb33d78959c8f2022ba2b30c9f6033c4a16da2ca7496ca\": rpc error: code = NotFound desc = could not find container \"a9e9814d73fe752e4dbb33d78959c8f2022ba2b30c9f6033c4a16da2ca7496ca\": container with ID starting with a9e9814d73fe752e4dbb33d78959c8f2022ba2b30c9f6033c4a16da2ca7496ca not found: ID does not exist"
Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.784063 4869 scope.go:117] "RemoveContainer" containerID="61fcd884bab5d33e25893146dfbf694ffa7ef3d7c64bc088d483d0e550723558"
Jan 06 14:37:30 crc kubenswrapper[4869]: E0106 14:37:30.784560 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61fcd884bab5d33e25893146dfbf694ffa7ef3d7c64bc088d483d0e550723558\": container with ID starting with 61fcd884bab5d33e25893146dfbf694ffa7ef3d7c64bc088d483d0e550723558 not found: ID does not exist" containerID="61fcd884bab5d33e25893146dfbf694ffa7ef3d7c64bc088d483d0e550723558"
Jan 06 14:37:30 crc kubenswrapper[4869]: I0106 14:37:30.784640 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61fcd884bab5d33e25893146dfbf694ffa7ef3d7c64bc088d483d0e550723558"} err="failed to get container status \"61fcd884bab5d33e25893146dfbf694ffa7ef3d7c64bc088d483d0e550723558\": rpc error: code = NotFound desc = could not find container \"61fcd884bab5d33e25893146dfbf694ffa7ef3d7c64bc088d483d0e550723558\": container with ID starting with 61fcd884bab5d33e25893146dfbf694ffa7ef3d7c64bc088d483d0e550723558 not found: ID does not exist"
Jan 06 14:37:31 crc kubenswrapper[4869]: I0106 14:37:31.715512 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d6ac3ca-f2da-4739-871f-d9c18bacb167" path="/var/lib/kubelet/pods/7d6ac3ca-f2da-4739-871f-d9c18bacb167/volumes"
Jan 06 14:37:35 crc kubenswrapper[4869]: I0106 14:37:35.468536 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j8c5h"]
Jan 06 14:37:35 crc kubenswrapper[4869]: E0106 14:37:35.469530 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6ac3ca-f2da-4739-871f-d9c18bacb167" containerName="extract-utilities"
Jan 06 14:37:35 crc kubenswrapper[4869]: I0106 14:37:35.469543 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6ac3ca-f2da-4739-871f-d9c18bacb167" containerName="extract-utilities"
Jan 06 14:37:35 crc kubenswrapper[4869]: E0106 14:37:35.469562 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6ac3ca-f2da-4739-871f-d9c18bacb167" containerName="extract-content"
Jan 06 14:37:35 crc kubenswrapper[4869]: I0106 14:37:35.469568 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6ac3ca-f2da-4739-871f-d9c18bacb167" containerName="extract-content"
Jan 06 14:37:35 crc kubenswrapper[4869]: E0106 14:37:35.469583 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6ac3ca-f2da-4739-871f-d9c18bacb167" containerName="registry-server"
Jan 06 14:37:35 crc kubenswrapper[4869]: I0106 14:37:35.469590 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6ac3ca-f2da-4739-871f-d9c18bacb167" containerName="registry-server"
Jan 06 14:37:35 crc kubenswrapper[4869]: I0106 14:37:35.469843 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6ac3ca-f2da-4739-871f-d9c18bacb167" containerName="registry-server"
Jan 06 14:37:35 crc kubenswrapper[4869]: I0106 14:37:35.473681 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j8c5h"
Jan 06 14:37:35 crc kubenswrapper[4869]: I0106 14:37:35.491769 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j8c5h"]
Jan 06 14:37:35 crc kubenswrapper[4869]: I0106 14:37:35.554318 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88dc1f2a-b622-43b3-9c76-aa6e0e13ffed-catalog-content\") pod \"community-operators-j8c5h\" (UID: \"88dc1f2a-b622-43b3-9c76-aa6e0e13ffed\") " pod="openshift-marketplace/community-operators-j8c5h"
Jan 06 14:37:35 crc kubenswrapper[4869]: I0106 14:37:35.554446 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h76wv\" (UniqueName: \"kubernetes.io/projected/88dc1f2a-b622-43b3-9c76-aa6e0e13ffed-kube-api-access-h76wv\") pod \"community-operators-j8c5h\" (UID: \"88dc1f2a-b622-43b3-9c76-aa6e0e13ffed\") " pod="openshift-marketplace/community-operators-j8c5h"
Jan 06 14:37:35 crc kubenswrapper[4869]: I0106 14:37:35.554496 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88dc1f2a-b622-43b3-9c76-aa6e0e13ffed-utilities\") pod \"community-operators-j8c5h\" (UID: \"88dc1f2a-b622-43b3-9c76-aa6e0e13ffed\") " pod="openshift-marketplace/community-operators-j8c5h"
Jan 06 14:37:35 crc kubenswrapper[4869]: I0106 14:37:35.655940 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88dc1f2a-b622-43b3-9c76-aa6e0e13ffed-catalog-content\") pod \"community-operators-j8c5h\" (UID: \"88dc1f2a-b622-43b3-9c76-aa6e0e13ffed\") " pod="openshift-marketplace/community-operators-j8c5h"
Jan 06 14:37:35 crc kubenswrapper[4869]: I0106 14:37:35.656050 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h76wv\" (UniqueName: \"kubernetes.io/projected/88dc1f2a-b622-43b3-9c76-aa6e0e13ffed-kube-api-access-h76wv\") pod \"community-operators-j8c5h\" (UID: \"88dc1f2a-b622-43b3-9c76-aa6e0e13ffed\") " pod="openshift-marketplace/community-operators-j8c5h"
Jan 06 14:37:35 crc kubenswrapper[4869]: I0106 14:37:35.656101 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88dc1f2a-b622-43b3-9c76-aa6e0e13ffed-utilities\") pod \"community-operators-j8c5h\" (UID: \"88dc1f2a-b622-43b3-9c76-aa6e0e13ffed\") " pod="openshift-marketplace/community-operators-j8c5h"
Jan 06 14:37:35 crc kubenswrapper[4869]: I0106 14:37:35.656608 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88dc1f2a-b622-43b3-9c76-aa6e0e13ffed-utilities\") pod \"community-operators-j8c5h\" (UID: \"88dc1f2a-b622-43b3-9c76-aa6e0e13ffed\") " pod="openshift-marketplace/community-operators-j8c5h"
Jan 06 14:37:35 crc kubenswrapper[4869]: I0106 14:37:35.656854 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88dc1f2a-b622-43b3-9c76-aa6e0e13ffed-catalog-content\") pod \"community-operators-j8c5h\" (UID: \"88dc1f2a-b622-43b3-9c76-aa6e0e13ffed\") " pod="openshift-marketplace/community-operators-j8c5h"
Jan 06 14:37:35 crc kubenswrapper[4869]: I0106 14:37:35.688391 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h76wv\" (UniqueName: \"kubernetes.io/projected/88dc1f2a-b622-43b3-9c76-aa6e0e13ffed-kube-api-access-h76wv\") pod \"community-operators-j8c5h\" (UID: \"88dc1f2a-b622-43b3-9c76-aa6e0e13ffed\") " pod="openshift-marketplace/community-operators-j8c5h"
Jan 06 14:37:35 crc kubenswrapper[4869]: I0106 14:37:35.795895 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j8c5h"
Jan 06 14:37:36 crc kubenswrapper[4869]: I0106 14:37:36.133864 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j8c5h"]
Jan 06 14:37:36 crc kubenswrapper[4869]: I0106 14:37:36.714018 4869 generic.go:334] "Generic (PLEG): container finished" podID="88dc1f2a-b622-43b3-9c76-aa6e0e13ffed" containerID="6c50034c0efbdeb14186f1c841ac853bca5a5473b7907baed48a1870749bc991" exitCode=0
Jan 06 14:37:36 crc kubenswrapper[4869]: I0106 14:37:36.714133 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8c5h" event={"ID":"88dc1f2a-b622-43b3-9c76-aa6e0e13ffed","Type":"ContainerDied","Data":"6c50034c0efbdeb14186f1c841ac853bca5a5473b7907baed48a1870749bc991"}
Jan 06 14:37:36 crc kubenswrapper[4869]: I0106 14:37:36.714374 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8c5h" event={"ID":"88dc1f2a-b622-43b3-9c76-aa6e0e13ffed","Type":"ContainerStarted","Data":"2c4f579d798a3097758cc612cc6e3907743c1e633c0364ba7511830209a9bf22"}
Jan 06 14:37:38 crc kubenswrapper[4869]: I0106 14:37:38.734370 4869 generic.go:334] "Generic (PLEG): container finished" podID="88dc1f2a-b622-43b3-9c76-aa6e0e13ffed" containerID="981f29ee4c68af8e471e4697abb4dd6b3d759395957ed625c040201b9b96ea5a" exitCode=0
Jan 06 14:37:38 crc kubenswrapper[4869]: I0106 14:37:38.734504 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8c5h" event={"ID":"88dc1f2a-b622-43b3-9c76-aa6e0e13ffed","Type":"ContainerDied","Data":"981f29ee4c68af8e471e4697abb4dd6b3d759395957ed625c040201b9b96ea5a"}
Jan 06 14:37:39 crc kubenswrapper[4869]: I0106 14:37:39.704610 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af"
Jan 06 14:37:39 crc kubenswrapper[4869]: E0106 14:37:39.705123 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1"
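[Editor's note] The machine-config-daemon entries above and below repeat the same "back-off 5m0s" error every 10-30 seconds. The kubelet's restart back-off doubles per failed attempt, starting at 10s and capped at five minutes, so a container that keeps crashing settles into exactly this steady state. A minimal illustrative sketch of that schedule (not kubelet code; base/cap values are the standard defaults):

```python
# Illustrative only: kubelet-style crash restart back-off.
# Doubles from 10s per failed restart, capped at 300s ("back-off 5m0s").
def backoff_schedule(base=10, cap=300):
    delay = base
    while True:
        yield delay
        delay = min(delay * 2, cap)

gen = backoff_schedule()
print([next(gen) for _ in range(7)])  # [10, 20, 40, 80, 160, 300, 300]
```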
pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:37:40 crc kubenswrapper[4869]: I0106 14:37:40.753154 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8c5h" event={"ID":"88dc1f2a-b622-43b3-9c76-aa6e0e13ffed","Type":"ContainerStarted","Data":"03dadd397207cccc1d898820629bace367c47f3ea3824dc4cd67a7702b7409cd"} Jan 06 14:37:40 crc kubenswrapper[4869]: I0106 14:37:40.775210 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j8c5h" podStartSLOduration=2.645443187 podStartE2EDuration="5.77519508s" podCreationTimestamp="2026-01-06 14:37:35 +0000 UTC" firstStartedPulling="2026-01-06 14:37:36.715725054 +0000 UTC m=+2275.255412718" lastFinishedPulling="2026-01-06 14:37:39.845476947 +0000 UTC m=+2278.385164611" observedRunningTime="2026-01-06 14:37:40.767827454 +0000 UTC m=+2279.307515138" watchObservedRunningTime="2026-01-06 14:37:40.77519508 +0000 UTC m=+2279.314882744" Jan 06 14:37:45 crc kubenswrapper[4869]: I0106 14:37:45.797531 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j8c5h" Jan 06 14:37:45 crc kubenswrapper[4869]: I0106 14:37:45.798211 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j8c5h" Jan 06 14:37:45 crc kubenswrapper[4869]: I0106 14:37:45.859046 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j8c5h" Jan 06 14:37:46 crc kubenswrapper[4869]: I0106 14:37:46.839773 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j8c5h" Jan 06 14:37:46 crc kubenswrapper[4869]: I0106 14:37:46.891888 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j8c5h"] Jan 06 14:37:48 crc kubenswrapper[4869]: I0106 14:37:48.818262 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j8c5h" podUID="88dc1f2a-b622-43b3-9c76-aa6e0e13ffed" containerName="registry-server" containerID="cri-o://03dadd397207cccc1d898820629bace367c47f3ea3824dc4cd67a7702b7409cd" gracePeriod=2 Jan 06 14:37:49 crc kubenswrapper[4869]: I0106 14:37:49.827812 4869 generic.go:334] "Generic (PLEG): container finished" podID="88dc1f2a-b622-43b3-9c76-aa6e0e13ffed" containerID="03dadd397207cccc1d898820629bace367c47f3ea3824dc4cd67a7702b7409cd" exitCode=0 Jan 06 14:37:49 crc kubenswrapper[4869]: I0106 14:37:49.828156 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8c5h" event={"ID":"88dc1f2a-b622-43b3-9c76-aa6e0e13ffed","Type":"ContainerDied","Data":"03dadd397207cccc1d898820629bace367c47f3ea3824dc4cd67a7702b7409cd"} Jan 06 14:37:49 crc kubenswrapper[4869]: I0106 14:37:49.828186 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j8c5h" event={"ID":"88dc1f2a-b622-43b3-9c76-aa6e0e13ffed","Type":"ContainerDied","Data":"2c4f579d798a3097758cc612cc6e3907743c1e633c0364ba7511830209a9bf22"} Jan 06 14:37:49 crc kubenswrapper[4869]: I0106 14:37:49.828198 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c4f579d798a3097758cc612cc6e3907743c1e633c0364ba7511830209a9bf22" Jan 06 14:37:49 crc kubenswrapper[4869]: I0106 14:37:49.866032 
Jan 06 14:37:49 crc kubenswrapper[4869]: I0106 14:37:49.943641 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88dc1f2a-b622-43b3-9c76-aa6e0e13ffed-utilities\") pod \"88dc1f2a-b622-43b3-9c76-aa6e0e13ffed\" (UID: \"88dc1f2a-b622-43b3-9c76-aa6e0e13ffed\") "
Jan 06 14:37:49 crc kubenswrapper[4869]: I0106 14:37:49.943762 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88dc1f2a-b622-43b3-9c76-aa6e0e13ffed-catalog-content\") pod \"88dc1f2a-b622-43b3-9c76-aa6e0e13ffed\" (UID: \"88dc1f2a-b622-43b3-9c76-aa6e0e13ffed\") "
Jan 06 14:37:49 crc kubenswrapper[4869]: I0106 14:37:49.943937 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h76wv\" (UniqueName: \"kubernetes.io/projected/88dc1f2a-b622-43b3-9c76-aa6e0e13ffed-kube-api-access-h76wv\") pod \"88dc1f2a-b622-43b3-9c76-aa6e0e13ffed\" (UID: \"88dc1f2a-b622-43b3-9c76-aa6e0e13ffed\") "
Jan 06 14:37:49 crc kubenswrapper[4869]: I0106 14:37:49.944518 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88dc1f2a-b622-43b3-9c76-aa6e0e13ffed-utilities" (OuterVolumeSpecName: "utilities") pod "88dc1f2a-b622-43b3-9c76-aa6e0e13ffed" (UID: "88dc1f2a-b622-43b3-9c76-aa6e0e13ffed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 06 14:37:49 crc kubenswrapper[4869]: I0106 14:37:49.949560 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88dc1f2a-b622-43b3-9c76-aa6e0e13ffed-kube-api-access-h76wv" (OuterVolumeSpecName: "kube-api-access-h76wv") pod "88dc1f2a-b622-43b3-9c76-aa6e0e13ffed" (UID: "88dc1f2a-b622-43b3-9c76-aa6e0e13ffed"). InnerVolumeSpecName "kube-api-access-h76wv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 06 14:37:49 crc kubenswrapper[4869]: I0106 14:37:49.995072 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88dc1f2a-b622-43b3-9c76-aa6e0e13ffed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88dc1f2a-b622-43b3-9c76-aa6e0e13ffed" (UID: "88dc1f2a-b622-43b3-9c76-aa6e0e13ffed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 06 14:37:50 crc kubenswrapper[4869]: I0106 14:37:50.045766 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88dc1f2a-b622-43b3-9c76-aa6e0e13ffed-utilities\") on node \"crc\" DevicePath \"\""
Jan 06 14:37:50 crc kubenswrapper[4869]: I0106 14:37:50.046118 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88dc1f2a-b622-43b3-9c76-aa6e0e13ffed-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 06 14:37:50 crc kubenswrapper[4869]: I0106 14:37:50.046251 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h76wv\" (UniqueName: \"kubernetes.io/projected/88dc1f2a-b622-43b3-9c76-aa6e0e13ffed-kube-api-access-h76wv\") on node \"crc\" DevicePath \"\""
Jan 06 14:37:50 crc kubenswrapper[4869]: I0106 14:37:50.835322 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j8c5h"
Jan 06 14:37:50 crc kubenswrapper[4869]: I0106 14:37:50.880900 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j8c5h"]
Jan 06 14:37:50 crc kubenswrapper[4869]: I0106 14:37:50.891532 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j8c5h"]
Jan 06 14:37:51 crc kubenswrapper[4869]: I0106 14:37:51.725328 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88dc1f2a-b622-43b3-9c76-aa6e0e13ffed" path="/var/lib/kubelet/pods/88dc1f2a-b622-43b3-9c76-aa6e0e13ffed/volumes"
Jan 06 14:37:53 crc kubenswrapper[4869]: I0106 14:37:53.706719 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af"
Jan 06 14:37:53 crc kubenswrapper[4869]: E0106 14:37:53.707495 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1"
Jan 06 14:38:05 crc kubenswrapper[4869]: I0106 14:38:05.705405 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af"
Jan 06 14:38:05 crc kubenswrapper[4869]: E0106 14:38:05.706180 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1"
Jan 06 14:38:17 crc kubenswrapper[4869]: I0106 14:38:17.081933 4869 generic.go:334] "Generic (PLEG): container finished" podID="1d87f359-40bb-40c9-b5f4-9b390767b167" containerID="a4e12b6b70f96c9c4c2488e65fd7fac60a478801c9b7e2a48d26d1daeb3d5e55" exitCode=0
Jan 06 14:38:17 crc kubenswrapper[4869]: I0106 14:38:17.082047 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b" event={"ID":"1d87f359-40bb-40c9-b5f4-9b390767b167","Type":"ContainerDied","Data":"a4e12b6b70f96c9c4c2488e65fd7fac60a478801c9b7e2a48d26d1daeb3d5e55"}
Jan 06 14:38:18 crc kubenswrapper[4869]: I0106 14:38:18.476990 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b"
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b" Jan 06 14:38:18 crc kubenswrapper[4869]: I0106 14:38:18.665453 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d87f359-40bb-40c9-b5f4-9b390767b167-ssh-key-openstack-edpm-ipam\") pod \"1d87f359-40bb-40c9-b5f4-9b390767b167\" (UID: \"1d87f359-40bb-40c9-b5f4-9b390767b167\") " Jan 06 14:38:18 crc kubenswrapper[4869]: I0106 14:38:18.665521 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1d87f359-40bb-40c9-b5f4-9b390767b167-ceph\") pod \"1d87f359-40bb-40c9-b5f4-9b390767b167\" (UID: \"1d87f359-40bb-40c9-b5f4-9b390767b167\") " Jan 06 14:38:18 crc kubenswrapper[4869]: I0106 14:38:18.665596 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tpp5\" (UniqueName: \"kubernetes.io/projected/1d87f359-40bb-40c9-b5f4-9b390767b167-kube-api-access-5tpp5\") pod \"1d87f359-40bb-40c9-b5f4-9b390767b167\" (UID: \"1d87f359-40bb-40c9-b5f4-9b390767b167\") " Jan 06 14:38:18 crc kubenswrapper[4869]: I0106 14:38:18.665636 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d87f359-40bb-40c9-b5f4-9b390767b167-inventory\") pod \"1d87f359-40bb-40c9-b5f4-9b390767b167\" (UID: \"1d87f359-40bb-40c9-b5f4-9b390767b167\") " Jan 06 14:38:18 crc kubenswrapper[4869]: I0106 14:38:18.671607 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d87f359-40bb-40c9-b5f4-9b390767b167-ceph" (OuterVolumeSpecName: "ceph") pod "1d87f359-40bb-40c9-b5f4-9b390767b167" (UID: "1d87f359-40bb-40c9-b5f4-9b390767b167"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:38:18 crc kubenswrapper[4869]: I0106 14:38:18.672201 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d87f359-40bb-40c9-b5f4-9b390767b167-kube-api-access-5tpp5" (OuterVolumeSpecName: "kube-api-access-5tpp5") pod "1d87f359-40bb-40c9-b5f4-9b390767b167" (UID: "1d87f359-40bb-40c9-b5f4-9b390767b167"). InnerVolumeSpecName "kube-api-access-5tpp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:38:18 crc kubenswrapper[4869]: I0106 14:38:18.692753 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d87f359-40bb-40c9-b5f4-9b390767b167-inventory" (OuterVolumeSpecName: "inventory") pod "1d87f359-40bb-40c9-b5f4-9b390767b167" (UID: "1d87f359-40bb-40c9-b5f4-9b390767b167"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:38:18 crc kubenswrapper[4869]: I0106 14:38:18.698248 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d87f359-40bb-40c9-b5f4-9b390767b167-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1d87f359-40bb-40c9-b5f4-9b390767b167" (UID: "1d87f359-40bb-40c9-b5f4-9b390767b167"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:38:18 crc kubenswrapper[4869]: I0106 14:38:18.774497 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d87f359-40bb-40c9-b5f4-9b390767b167-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:38:18 crc kubenswrapper[4869]: I0106 14:38:18.774544 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1d87f359-40bb-40c9-b5f4-9b390767b167-ceph\") on node \"crc\" DevicePath \"\"" Jan 06 14:38:18 crc kubenswrapper[4869]: I0106 14:38:18.774558 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tpp5\" (UniqueName: \"kubernetes.io/projected/1d87f359-40bb-40c9-b5f4-9b390767b167-kube-api-access-5tpp5\") on node \"crc\" DevicePath \"\"" Jan 06 14:38:18 crc kubenswrapper[4869]: I0106 14:38:18.774603 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d87f359-40bb-40c9-b5f4-9b390767b167-inventory\") on node \"crc\" DevicePath \"\"" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.098062 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b" event={"ID":"1d87f359-40bb-40c9-b5f4-9b390767b167","Type":"ContainerDied","Data":"62dffeee4731f16457c87394c41b9c959d04b778239dc2ef0b75c5ef9af2b039"} Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.098361 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62dffeee4731f16457c87394c41b9c959d04b778239dc2ef0b75c5ef9af2b039" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.098153 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.173874 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hgcbw"] Jan 06 14:38:19 crc kubenswrapper[4869]: E0106 14:38:19.174290 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88dc1f2a-b622-43b3-9c76-aa6e0e13ffed" containerName="registry-server" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.174313 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="88dc1f2a-b622-43b3-9c76-aa6e0e13ffed" containerName="registry-server" Jan 06 14:38:19 crc kubenswrapper[4869]: E0106 14:38:19.174334 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d87f359-40bb-40c9-b5f4-9b390767b167" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.174346 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d87f359-40bb-40c9-b5f4-9b390767b167" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 06 14:38:19 crc kubenswrapper[4869]: E0106 14:38:19.174390 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88dc1f2a-b622-43b3-9c76-aa6e0e13ffed" containerName="extract-utilities" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.174399 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="88dc1f2a-b622-43b3-9c76-aa6e0e13ffed" containerName="extract-utilities" Jan 06 14:38:19 crc kubenswrapper[4869]: E0106 14:38:19.174411 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88dc1f2a-b622-43b3-9c76-aa6e0e13ffed" containerName="extract-content" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.174419 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="88dc1f2a-b622-43b3-9c76-aa6e0e13ffed" containerName="extract-content" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.174580 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d87f359-40bb-40c9-b5f4-9b390767b167" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.174604 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="88dc1f2a-b622-43b3-9c76-aa6e0e13ffed" containerName="registry-server" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.175236 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hgcbw" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.178620 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.180988 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hgcbw"] Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.181941 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c084a08a-3404-4e7e-b216-d2426f9e0a48-ceph\") pod \"ssh-known-hosts-edpm-deployment-hgcbw\" (UID: \"c084a08a-3404-4e7e-b216-d2426f9e0a48\") " pod="openstack/ssh-known-hosts-edpm-deployment-hgcbw" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.181983 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.182051 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzr9s\" (UniqueName: \"kubernetes.io/projected/c084a08a-3404-4e7e-b216-d2426f9e0a48-kube-api-access-xzr9s\") pod \"ssh-known-hosts-edpm-deployment-hgcbw\" (UID: \"c084a08a-3404-4e7e-b216-d2426f9e0a48\") " pod="openstack/ssh-known-hosts-edpm-deployment-hgcbw" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.182077 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c084a08a-3404-4e7e-b216-d2426f9e0a48-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hgcbw\" (UID: \"c084a08a-3404-4e7e-b216-d2426f9e0a48\") " pod="openstack/ssh-known-hosts-edpm-deployment-hgcbw" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.182118 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c084a08a-3404-4e7e-b216-d2426f9e0a48-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hgcbw\" (UID: \"c084a08a-3404-4e7e-b216-d2426f9e0a48\") " pod="openstack/ssh-known-hosts-edpm-deployment-hgcbw" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.183592 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.183624 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.183709 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.283008 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzr9s\" (UniqueName: \"kubernetes.io/projected/c084a08a-3404-4e7e-b216-d2426f9e0a48-kube-api-access-xzr9s\") pod \"ssh-known-hosts-edpm-deployment-hgcbw\" (UID: \"c084a08a-3404-4e7e-b216-d2426f9e0a48\") " pod="openstack/ssh-known-hosts-edpm-deployment-hgcbw" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.283052 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c084a08a-3404-4e7e-b216-d2426f9e0a48-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hgcbw\" 
(UID: \"c084a08a-3404-4e7e-b216-d2426f9e0a48\") " pod="openstack/ssh-known-hosts-edpm-deployment-hgcbw" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.283092 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c084a08a-3404-4e7e-b216-d2426f9e0a48-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hgcbw\" (UID: \"c084a08a-3404-4e7e-b216-d2426f9e0a48\") " pod="openstack/ssh-known-hosts-edpm-deployment-hgcbw" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.283144 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c084a08a-3404-4e7e-b216-d2426f9e0a48-ceph\") pod \"ssh-known-hosts-edpm-deployment-hgcbw\" (UID: \"c084a08a-3404-4e7e-b216-d2426f9e0a48\") " pod="openstack/ssh-known-hosts-edpm-deployment-hgcbw" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.287435 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c084a08a-3404-4e7e-b216-d2426f9e0a48-ceph\") pod \"ssh-known-hosts-edpm-deployment-hgcbw\" (UID: \"c084a08a-3404-4e7e-b216-d2426f9e0a48\") " pod="openstack/ssh-known-hosts-edpm-deployment-hgcbw" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.287981 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c084a08a-3404-4e7e-b216-d2426f9e0a48-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hgcbw\" (UID: \"c084a08a-3404-4e7e-b216-d2426f9e0a48\") " pod="openstack/ssh-known-hosts-edpm-deployment-hgcbw" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.288961 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c084a08a-3404-4e7e-b216-d2426f9e0a48-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hgcbw\" (UID: \"c084a08a-3404-4e7e-b216-d2426f9e0a48\") " pod="openstack/ssh-known-hosts-edpm-deployment-hgcbw" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.302833 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzr9s\" (UniqueName: \"kubernetes.io/projected/c084a08a-3404-4e7e-b216-d2426f9e0a48-kube-api-access-xzr9s\") pod \"ssh-known-hosts-edpm-deployment-hgcbw\" (UID: \"c084a08a-3404-4e7e-b216-d2426f9e0a48\") " pod="openstack/ssh-known-hosts-edpm-deployment-hgcbw" Jan 06 14:38:19 crc kubenswrapper[4869]: I0106 14:38:19.492028 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hgcbw" Jan 06 14:38:20 crc kubenswrapper[4869]: I0106 14:38:20.042296 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hgcbw"] Jan 06 14:38:20 crc kubenswrapper[4869]: I0106 14:38:20.108227 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hgcbw" event={"ID":"c084a08a-3404-4e7e-b216-d2426f9e0a48","Type":"ContainerStarted","Data":"ccc672e19356062d9a4bae63deefcc901bf744c2786d7395dacfc429a7546377"} Jan 06 14:38:20 crc kubenswrapper[4869]: I0106 14:38:20.708075 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:38:20 crc kubenswrapper[4869]: E0106 14:38:20.711196 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:38:21 crc kubenswrapper[4869]: I0106 14:38:21.116580 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hgcbw" event={"ID":"c084a08a-3404-4e7e-b216-d2426f9e0a48","Type":"ContainerStarted","Data":"f1cf17772fa7bb74d2fc92e7f248a8dd9d27e90bdeb3ed1552f6b996887f1b04"} Jan 06 14:38:21 crc kubenswrapper[4869]: I0106 14:38:21.132174 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-hgcbw" podStartSLOduration=1.508108615 podStartE2EDuration="2.132157141s" podCreationTimestamp="2026-01-06 14:38:19 +0000 UTC" firstStartedPulling="2026-01-06 14:38:20.04827533 +0000 UTC m=+2318.587962994" lastFinishedPulling="2026-01-06 14:38:20.672323856 +0000 UTC m=+2319.212011520" observedRunningTime="2026-01-06 14:38:21.131486596 +0000 UTC m=+2319.671174270" watchObservedRunningTime="2026-01-06 14:38:21.132157141 +0000 UTC m=+2319.671844805" Jan 06 14:38:31 crc kubenswrapper[4869]: I0106 14:38:31.190585 4869 generic.go:334] "Generic (PLEG): container finished" podID="c084a08a-3404-4e7e-b216-d2426f9e0a48" containerID="f1cf17772fa7bb74d2fc92e7f248a8dd9d27e90bdeb3ed1552f6b996887f1b04" exitCode=0 Jan 06 14:38:31 crc kubenswrapper[4869]: I0106 14:38:31.190698 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hgcbw" event={"ID":"c084a08a-3404-4e7e-b216-d2426f9e0a48","Type":"ContainerDied","Data":"f1cf17772fa7bb74d2fc92e7f248a8dd9d27e90bdeb3ed1552f6b996887f1b04"} Jan 06 14:38:32 crc kubenswrapper[4869]: I0106 14:38:32.625604 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hgcbw" Jan 06 14:38:32 crc kubenswrapper[4869]: I0106 14:38:32.704711 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:38:32 crc kubenswrapper[4869]: E0106 14:38:32.705117 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:38:32 crc kubenswrapper[4869]: I0106 14:38:32.786632 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzr9s\" (UniqueName: \"kubernetes.io/projected/c084a08a-3404-4e7e-b216-d2426f9e0a48-kube-api-access-xzr9s\") pod \"c084a08a-3404-4e7e-b216-d2426f9e0a48\" (UID: \"c084a08a-3404-4e7e-b216-d2426f9e0a48\") " Jan 06 14:38:32 crc kubenswrapper[4869]: I0106 14:38:32.786715 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c084a08a-3404-4e7e-b216-d2426f9e0a48-inventory-0\") pod \"c084a08a-3404-4e7e-b216-d2426f9e0a48\" (UID: \"c084a08a-3404-4e7e-b216-d2426f9e0a48\") " Jan 06 14:38:32 crc kubenswrapper[4869]: I0106 14:38:32.786764 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c084a08a-3404-4e7e-b216-d2426f9e0a48-ceph\") pod \"c084a08a-3404-4e7e-b216-d2426f9e0a48\" (UID: \"c084a08a-3404-4e7e-b216-d2426f9e0a48\") " Jan 06 14:38:32 crc kubenswrapper[4869]: I0106 14:38:32.786805 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c084a08a-3404-4e7e-b216-d2426f9e0a48-ssh-key-openstack-edpm-ipam\") pod \"c084a08a-3404-4e7e-b216-d2426f9e0a48\" (UID: \"c084a08a-3404-4e7e-b216-d2426f9e0a48\") " Jan 06 14:38:32 crc kubenswrapper[4869]: I0106 14:38:32.798156 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c084a08a-3404-4e7e-b216-d2426f9e0a48-ceph" (OuterVolumeSpecName: "ceph") pod "c084a08a-3404-4e7e-b216-d2426f9e0a48" (UID: "c084a08a-3404-4e7e-b216-d2426f9e0a48"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:38:32 crc kubenswrapper[4869]: I0106 14:38:32.798167 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c084a08a-3404-4e7e-b216-d2426f9e0a48-kube-api-access-xzr9s" (OuterVolumeSpecName: "kube-api-access-xzr9s") pod "c084a08a-3404-4e7e-b216-d2426f9e0a48" (UID: "c084a08a-3404-4e7e-b216-d2426f9e0a48"). InnerVolumeSpecName "kube-api-access-xzr9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:38:32 crc kubenswrapper[4869]: I0106 14:38:32.821157 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c084a08a-3404-4e7e-b216-d2426f9e0a48-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c084a08a-3404-4e7e-b216-d2426f9e0a48" (UID: "c084a08a-3404-4e7e-b216-d2426f9e0a48"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:38:32 crc kubenswrapper[4869]: I0106 14:38:32.822599 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c084a08a-3404-4e7e-b216-d2426f9e0a48-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c084a08a-3404-4e7e-b216-d2426f9e0a48" (UID: "c084a08a-3404-4e7e-b216-d2426f9e0a48"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:38:32 crc kubenswrapper[4869]: I0106 14:38:32.889098 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzr9s\" (UniqueName: \"kubernetes.io/projected/c084a08a-3404-4e7e-b216-d2426f9e0a48-kube-api-access-xzr9s\") on node \"crc\" DevicePath \"\"" Jan 06 14:38:32 crc kubenswrapper[4869]: I0106 14:38:32.889128 4869 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c084a08a-3404-4e7e-b216-d2426f9e0a48-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 06 14:38:32 crc kubenswrapper[4869]: I0106 14:38:32.889137 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c084a08a-3404-4e7e-b216-d2426f9e0a48-ceph\") on node \"crc\" DevicePath \"\"" Jan 06 14:38:32 crc kubenswrapper[4869]: I0106 14:38:32.889146 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c084a08a-3404-4e7e-b216-d2426f9e0a48-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.210073 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hgcbw" event={"ID":"c084a08a-3404-4e7e-b216-d2426f9e0a48","Type":"ContainerDied","Data":"ccc672e19356062d9a4bae63deefcc901bf744c2786d7395dacfc429a7546377"} Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.210117 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccc672e19356062d9a4bae63deefcc901bf744c2786d7395dacfc429a7546377" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.210162 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hgcbw" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.285350 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp"] Jan 06 14:38:33 crc kubenswrapper[4869]: E0106 14:38:33.285971 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c084a08a-3404-4e7e-b216-d2426f9e0a48" containerName="ssh-known-hosts-edpm-deployment" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.285989 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c084a08a-3404-4e7e-b216-d2426f9e0a48" containerName="ssh-known-hosts-edpm-deployment" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.286178 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c084a08a-3404-4e7e-b216-d2426f9e0a48" containerName="ssh-known-hosts-edpm-deployment" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.286839 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.288466 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.288942 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.289048 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.289152 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.291601 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.307275 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp"] Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.409190 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6pfq\" (UniqueName: \"kubernetes.io/projected/90cfd369-16f0-4bf5-99df-8884d9db5240-kube-api-access-w6pfq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g8qtp\" (UID: \"90cfd369-16f0-4bf5-99df-8884d9db5240\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.409264 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/90cfd369-16f0-4bf5-99df-8884d9db5240-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g8qtp\" (UID: \"90cfd369-16f0-4bf5-99df-8884d9db5240\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.409370 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90cfd369-16f0-4bf5-99df-8884d9db5240-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g8qtp\" (UID: \"90cfd369-16f0-4bf5-99df-8884d9db5240\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.409394 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90cfd369-16f0-4bf5-99df-8884d9db5240-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g8qtp\" (UID: \"90cfd369-16f0-4bf5-99df-8884d9db5240\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.511452 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90cfd369-16f0-4bf5-99df-8884d9db5240-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g8qtp\" (UID: \"90cfd369-16f0-4bf5-99df-8884d9db5240\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.511934 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/90cfd369-16f0-4bf5-99df-8884d9db5240-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g8qtp\" (UID: \"90cfd369-16f0-4bf5-99df-8884d9db5240\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.512112 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6pfq\" (UniqueName: \"kubernetes.io/projected/90cfd369-16f0-4bf5-99df-8884d9db5240-kube-api-access-w6pfq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g8qtp\" (UID: \"90cfd369-16f0-4bf5-99df-8884d9db5240\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.512233 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/90cfd369-16f0-4bf5-99df-8884d9db5240-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g8qtp\" (UID: \"90cfd369-16f0-4bf5-99df-8884d9db5240\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.515932 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/90cfd369-16f0-4bf5-99df-8884d9db5240-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g8qtp\" (UID: \"90cfd369-16f0-4bf5-99df-8884d9db5240\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.525899 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90cfd369-16f0-4bf5-99df-8884d9db5240-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g8qtp\" (UID: \"90cfd369-16f0-4bf5-99df-8884d9db5240\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.531247 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90cfd369-16f0-4bf5-99df-8884d9db5240-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g8qtp\" (UID: \"90cfd369-16f0-4bf5-99df-8884d9db5240\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.534363 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6pfq\" (UniqueName: \"kubernetes.io/projected/90cfd369-16f0-4bf5-99df-8884d9db5240-kube-api-access-w6pfq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-g8qtp\" (UID: \"90cfd369-16f0-4bf5-99df-8884d9db5240\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp" Jan 06 14:38:33 crc kubenswrapper[4869]: I0106 14:38:33.610032 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp" Jan 06 14:38:34 crc kubenswrapper[4869]: I0106 14:38:34.134508 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp"] Jan 06 14:38:34 crc kubenswrapper[4869]: I0106 14:38:34.218317 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp" event={"ID":"90cfd369-16f0-4bf5-99df-8884d9db5240","Type":"ContainerStarted","Data":"e1778012ff0fa8c1536b7106de613b46bfe5a61bfed7b73f757a667965ea5b2f"} Jan 06 14:38:35 crc kubenswrapper[4869]: I0106 14:38:35.230508 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp" event={"ID":"90cfd369-16f0-4bf5-99df-8884d9db5240","Type":"ContainerStarted","Data":"245b585d7c452cb7e378b33259d1e4e50293c83ea088171cc54f5ad5ba79b5ed"} Jan 06 14:38:35 crc kubenswrapper[4869]: I0106 14:38:35.264317 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp" podStartSLOduration=1.8083347779999999 podStartE2EDuration="2.264291592s" podCreationTimestamp="2026-01-06 14:38:33 +0000 UTC" firstStartedPulling="2026-01-06 14:38:34.141949764 +0000 UTC m=+2332.681637468" lastFinishedPulling="2026-01-06 14:38:34.597906618 +0000 UTC m=+2333.137594282" observedRunningTime="2026-01-06 14:38:35.254476448 +0000 UTC m=+2333.794164122" watchObservedRunningTime="2026-01-06 14:38:35.264291592 +0000 UTC m=+2333.803979256" Jan 06 14:38:44 crc kubenswrapper[4869]: I0106 14:38:44.309599 4869 generic.go:334] "Generic (PLEG): container finished" podID="90cfd369-16f0-4bf5-99df-8884d9db5240" containerID="245b585d7c452cb7e378b33259d1e4e50293c83ea088171cc54f5ad5ba79b5ed" exitCode=0 Jan 06 14:38:44 crc kubenswrapper[4869]: I0106 14:38:44.309700 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp" event={"ID":"90cfd369-16f0-4bf5-99df-8884d9db5240","Type":"ContainerDied","Data":"245b585d7c452cb7e378b33259d1e4e50293c83ea088171cc54f5ad5ba79b5ed"} Jan 06 14:38:45 crc kubenswrapper[4869]: I0106 14:38:45.717893 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp" Jan 06 14:38:45 crc kubenswrapper[4869]: I0106 14:38:45.904583 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90cfd369-16f0-4bf5-99df-8884d9db5240-ssh-key-openstack-edpm-ipam\") pod \"90cfd369-16f0-4bf5-99df-8884d9db5240\" (UID: \"90cfd369-16f0-4bf5-99df-8884d9db5240\") " Jan 06 14:38:45 crc kubenswrapper[4869]: I0106 14:38:45.904756 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6pfq\" (UniqueName: \"kubernetes.io/projected/90cfd369-16f0-4bf5-99df-8884d9db5240-kube-api-access-w6pfq\") pod \"90cfd369-16f0-4bf5-99df-8884d9db5240\" (UID: \"90cfd369-16f0-4bf5-99df-8884d9db5240\") " Jan 06 14:38:45 crc kubenswrapper[4869]: I0106 14:38:45.904848 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/90cfd369-16f0-4bf5-99df-8884d9db5240-ceph\") pod \"90cfd369-16f0-4bf5-99df-8884d9db5240\" (UID: \"90cfd369-16f0-4bf5-99df-8884d9db5240\") " Jan 06 14:38:45 crc kubenswrapper[4869]: I0106 14:38:45.904898 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90cfd369-16f0-4bf5-99df-8884d9db5240-inventory\") pod \"90cfd369-16f0-4bf5-99df-8884d9db5240\" (UID: \"90cfd369-16f0-4bf5-99df-8884d9db5240\") " Jan 06 14:38:45 crc kubenswrapper[4869]: I0106 14:38:45.917993 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90cfd369-16f0-4bf5-99df-8884d9db5240-kube-api-access-w6pfq" (OuterVolumeSpecName: "kube-api-access-w6pfq") pod "90cfd369-16f0-4bf5-99df-8884d9db5240" (UID: "90cfd369-16f0-4bf5-99df-8884d9db5240"). InnerVolumeSpecName "kube-api-access-w6pfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:38:45 crc kubenswrapper[4869]: I0106 14:38:45.926284 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90cfd369-16f0-4bf5-99df-8884d9db5240-ceph" (OuterVolumeSpecName: "ceph") pod "90cfd369-16f0-4bf5-99df-8884d9db5240" (UID: "90cfd369-16f0-4bf5-99df-8884d9db5240"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:38:45 crc kubenswrapper[4869]: I0106 14:38:45.938829 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90cfd369-16f0-4bf5-99df-8884d9db5240-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "90cfd369-16f0-4bf5-99df-8884d9db5240" (UID: "90cfd369-16f0-4bf5-99df-8884d9db5240"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:38:45 crc kubenswrapper[4869]: I0106 14:38:45.940846 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90cfd369-16f0-4bf5-99df-8884d9db5240-inventory" (OuterVolumeSpecName: "inventory") pod "90cfd369-16f0-4bf5-99df-8884d9db5240" (UID: "90cfd369-16f0-4bf5-99df-8884d9db5240"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.007011 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6pfq\" (UniqueName: \"kubernetes.io/projected/90cfd369-16f0-4bf5-99df-8884d9db5240-kube-api-access-w6pfq\") on node \"crc\" DevicePath \"\"" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.007053 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/90cfd369-16f0-4bf5-99df-8884d9db5240-ceph\") on node \"crc\" DevicePath \"\"" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.007066 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90cfd369-16f0-4bf5-99df-8884d9db5240-inventory\") on node \"crc\" DevicePath \"\"" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.007078 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/90cfd369-16f0-4bf5-99df-8884d9db5240-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.326513 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp" event={"ID":"90cfd369-16f0-4bf5-99df-8884d9db5240","Type":"ContainerDied","Data":"e1778012ff0fa8c1536b7106de613b46bfe5a61bfed7b73f757a667965ea5b2f"} Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.326569 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-g8qtp" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.326580 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1778012ff0fa8c1536b7106de613b46bfe5a61bfed7b73f757a667965ea5b2f" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.415447 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z"] Jan 06 14:38:46 crc kubenswrapper[4869]: E0106 14:38:46.415834 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90cfd369-16f0-4bf5-99df-8884d9db5240" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.415857 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="90cfd369-16f0-4bf5-99df-8884d9db5240" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.416040 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="90cfd369-16f0-4bf5-99df-8884d9db5240" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.416618 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.418540 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.418646 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.418646 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.419421 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.422842 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.435205 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z"] Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.516724 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z\" (UID: \"45ba022e-05a0-419d-ae5a-4e77bfc47b8c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.516788 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq95k\" (UniqueName: \"kubernetes.io/projected/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-kube-api-access-bq95k\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z\" (UID: \"45ba022e-05a0-419d-ae5a-4e77bfc47b8c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.516832 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z\" (UID: \"45ba022e-05a0-419d-ae5a-4e77bfc47b8c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.517235 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z\" (UID: \"45ba022e-05a0-419d-ae5a-4e77bfc47b8c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.619241 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z\" (UID: \"45ba022e-05a0-419d-ae5a-4e77bfc47b8c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.619297 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bq95k\" (UniqueName: \"kubernetes.io/projected/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-kube-api-access-bq95k\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z\" (UID: \"45ba022e-05a0-419d-ae5a-4e77bfc47b8c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.619336 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z\" (UID: \"45ba022e-05a0-419d-ae5a-4e77bfc47b8c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.619413 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z\" (UID: \"45ba022e-05a0-419d-ae5a-4e77bfc47b8c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.624527 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z\" (UID: \"45ba022e-05a0-419d-ae5a-4e77bfc47b8c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.624758 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z\" (UID: \"45ba022e-05a0-419d-ae5a-4e77bfc47b8c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.624781 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z\" (UID: \"45ba022e-05a0-419d-ae5a-4e77bfc47b8c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.636843 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq95k\" (UniqueName: \"kubernetes.io/projected/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-kube-api-access-bq95k\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z\" (UID: \"45ba022e-05a0-419d-ae5a-4e77bfc47b8c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z" Jan 06 14:38:46 crc kubenswrapper[4869]: I0106 14:38:46.738155 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z" Jan 06 14:38:47 crc kubenswrapper[4869]: I0106 14:38:47.280842 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z"] Jan 06 14:38:47 crc kubenswrapper[4869]: I0106 14:38:47.338510 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z" event={"ID":"45ba022e-05a0-419d-ae5a-4e77bfc47b8c","Type":"ContainerStarted","Data":"76f139b2c0c54b20d675fc7e1a8f8fc465046ae133a97ad8371107b2b887e465"} Jan 06 14:38:47 crc kubenswrapper[4869]: I0106 14:38:47.705495 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:38:47 crc kubenswrapper[4869]: E0106 14:38:47.705852 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:38:48 crc kubenswrapper[4869]: I0106 14:38:48.348453 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z" event={"ID":"45ba022e-05a0-419d-ae5a-4e77bfc47b8c","Type":"ContainerStarted","Data":"fa3ffd4df85be8f4b7f503f9369d9499977085e336fb75b5fa17363235993abb"} Jan 06 14:38:48 crc kubenswrapper[4869]: I0106 14:38:48.372722 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z" podStartSLOduration=1.958518653 podStartE2EDuration="2.372705073s" podCreationTimestamp="2026-01-06 14:38:46 +0000 UTC" firstStartedPulling="2026-01-06 14:38:47.285249775 +0000 UTC m=+2345.824937439" lastFinishedPulling="2026-01-06 14:38:47.699436205 +0000 UTC m=+2346.239123859" observedRunningTime="2026-01-06 14:38:48.371427743 +0000 UTC m=+2346.911115407" watchObservedRunningTime="2026-01-06 14:38:48.372705073 +0000 UTC m=+2346.912392737" Jan 06 14:38:58 crc kubenswrapper[4869]: I0106 14:38:58.440900 4869 generic.go:334] "Generic (PLEG): container finished" podID="45ba022e-05a0-419d-ae5a-4e77bfc47b8c" containerID="fa3ffd4df85be8f4b7f503f9369d9499977085e336fb75b5fa17363235993abb" exitCode=0 Jan 06 14:38:58 crc kubenswrapper[4869]: I0106 14:38:58.441043 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z" event={"ID":"45ba022e-05a0-419d-ae5a-4e77bfc47b8c","Type":"ContainerDied","Data":"fa3ffd4df85be8f4b7f503f9369d9499977085e336fb75b5fa17363235993abb"} Jan 06 14:38:59 crc kubenswrapper[4869]: I0106 14:38:59.704557 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:38:59 crc kubenswrapper[4869]: E0106 14:38:59.705247 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 
Jan 06 14:38:59 crc kubenswrapper[4869]: I0106 14:38:59.906763 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.001627 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-ssh-key-openstack-edpm-ipam\") pod \"45ba022e-05a0-419d-ae5a-4e77bfc47b8c\" (UID: \"45ba022e-05a0-419d-ae5a-4e77bfc47b8c\") "
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.002157 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-ceph\") pod \"45ba022e-05a0-419d-ae5a-4e77bfc47b8c\" (UID: \"45ba022e-05a0-419d-ae5a-4e77bfc47b8c\") "
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.002208 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq95k\" (UniqueName: \"kubernetes.io/projected/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-kube-api-access-bq95k\") pod \"45ba022e-05a0-419d-ae5a-4e77bfc47b8c\" (UID: \"45ba022e-05a0-419d-ae5a-4e77bfc47b8c\") "
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.002424 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-inventory\") pod \"45ba022e-05a0-419d-ae5a-4e77bfc47b8c\" (UID: \"45ba022e-05a0-419d-ae5a-4e77bfc47b8c\") "
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.010876 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-kube-api-access-bq95k" (OuterVolumeSpecName: "kube-api-access-bq95k") pod "45ba022e-05a0-419d-ae5a-4e77bfc47b8c" (UID: "45ba022e-05a0-419d-ae5a-4e77bfc47b8c"). InnerVolumeSpecName "kube-api-access-bq95k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.021521 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-ceph" (OuterVolumeSpecName: "ceph") pod "45ba022e-05a0-419d-ae5a-4e77bfc47b8c" (UID: "45ba022e-05a0-419d-ae5a-4e77bfc47b8c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.033242 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-inventory" (OuterVolumeSpecName: "inventory") pod "45ba022e-05a0-419d-ae5a-4e77bfc47b8c" (UID: "45ba022e-05a0-419d-ae5a-4e77bfc47b8c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.034833 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "45ba022e-05a0-419d-ae5a-4e77bfc47b8c" (UID: "45ba022e-05a0-419d-ae5a-4e77bfc47b8c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.104389 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-ceph\") on node \"crc\" DevicePath \"\""
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.104428 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq95k\" (UniqueName: \"kubernetes.io/projected/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-kube-api-access-bq95k\") on node \"crc\" DevicePath \"\""
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.104441 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-inventory\") on node \"crc\" DevicePath \"\""
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.104450 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/45ba022e-05a0-419d-ae5a-4e77bfc47b8c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.461605 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z" event={"ID":"45ba022e-05a0-419d-ae5a-4e77bfc47b8c","Type":"ContainerDied","Data":"76f139b2c0c54b20d675fc7e1a8f8fc465046ae133a97ad8371107b2b887e465"}
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.461693 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76f139b2c0c54b20d675fc7e1a8f8fc465046ae133a97ad8371107b2b887e465"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.461742 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.597955 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"]
Jan 06 14:39:00 crc kubenswrapper[4869]: E0106 14:39:00.598402 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ba022e-05a0-419d-ae5a-4e77bfc47b8c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.598430 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ba022e-05a0-419d-ae5a-4e77bfc47b8c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.598653 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="45ba022e-05a0-419d-ae5a-4e77bfc47b8c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.599223 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.602585 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.603101 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.603303 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.603520 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.604298 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.604300 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.604428 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.608017 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.611545 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"]
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.612862 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.612924 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnjfm\" (UniqueName: \"kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-kube-api-access-cnjfm\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.613059 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.613098 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.613125 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.613158 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.613207 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.613236 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.613259 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.613333 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.613387 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.613475 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.613529 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.715373 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.715417 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.715443 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.715473 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.715504 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.715539 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.716789 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.716845 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnjfm\" (UniqueName: \"kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-kube-api-access-cnjfm\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.716951 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.717005 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.717022 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.717045 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.717110 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.722336 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.724298 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.724766 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.725377 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.727606 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.732046 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.734879 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.736388 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.737875 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.739614 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.739955 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.741948 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.742444 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnjfm\" (UniqueName: \"kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-kube-api-access-cnjfm\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:00 crc kubenswrapper[4869]: I0106 14:39:00.915776 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"
Jan 06 14:39:01 crc kubenswrapper[4869]: I0106 14:39:01.531837 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt"]
Jan 06 14:39:01 crc kubenswrapper[4869]: I0106 14:39:01.541687 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 06 14:39:02 crc kubenswrapper[4869]: I0106 14:39:02.488746 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt" event={"ID":"319f344b-5374-42d9-bfea-f25f3717ccf9","Type":"ContainerStarted","Data":"86fea0d7c419a8352f12855961b2f5f2900358fdd703faca6b631c2e9986ac99"}
Jan 06 14:39:02 crc kubenswrapper[4869]: I0106 14:39:02.489324 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt" event={"ID":"319f344b-5374-42d9-bfea-f25f3717ccf9","Type":"ContainerStarted","Data":"8c221fc0e8afa1d87c74c69aeac099f182adceffcd8537d4908f1190e699d122"}
Jan 06 14:39:02 crc kubenswrapper[4869]: I0106 14:39:02.513305 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt" podStartSLOduration=1.997346913 podStartE2EDuration="2.513274865s" podCreationTimestamp="2026-01-06 14:39:00 +0000 UTC" firstStartedPulling="2026-01-06 14:39:01.541407409 +0000 UTC m=+2360.081095073" lastFinishedPulling="2026-01-06 14:39:02.057335321 +0000 UTC m=+2360.597023025" observedRunningTime="2026-01-06 14:39:02.511620916 +0000 UTC m=+2361.051308580" watchObservedRunningTime="2026-01-06 14:39:02.513274865 +0000 UTC m=+2361.052962529"
Jan 06 14:39:11 crc kubenswrapper[4869]: I0106 14:39:11.712313 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af"
Jan 06 14:39:11 crc kubenswrapper[4869]: E0106 14:39:11.713062 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1"
Jan 06 14:39:24 crc kubenswrapper[4869]: I0106 14:39:24.704163 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af"
Jan 06 14:39:24 crc kubenswrapper[4869]: E0106 14:39:24.705264 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1"
Jan 06 14:39:24 crc kubenswrapper[4869]: I0106 14:39:24.996751 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7g8v5"]
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7g8v5" Jan 06 14:39:25 crc kubenswrapper[4869]: I0106 14:39:25.009088 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7g8v5"] Jan 06 14:39:25 crc kubenswrapper[4869]: I0106 14:39:25.103709 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fe6ba1-92c8-48e9-b072-4c5427b357a1-utilities\") pod \"redhat-marketplace-7g8v5\" (UID: \"88fe6ba1-92c8-48e9-b072-4c5427b357a1\") " pod="openshift-marketplace/redhat-marketplace-7g8v5" Jan 06 14:39:25 crc kubenswrapper[4869]: I0106 14:39:25.103781 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7q7s\" (UniqueName: \"kubernetes.io/projected/88fe6ba1-92c8-48e9-b072-4c5427b357a1-kube-api-access-x7q7s\") pod \"redhat-marketplace-7g8v5\" (UID: \"88fe6ba1-92c8-48e9-b072-4c5427b357a1\") " pod="openshift-marketplace/redhat-marketplace-7g8v5" Jan 06 14:39:25 crc kubenswrapper[4869]: I0106 14:39:25.104105 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fe6ba1-92c8-48e9-b072-4c5427b357a1-catalog-content\") pod \"redhat-marketplace-7g8v5\" (UID: \"88fe6ba1-92c8-48e9-b072-4c5427b357a1\") " pod="openshift-marketplace/redhat-marketplace-7g8v5" Jan 06 14:39:25 crc kubenswrapper[4869]: I0106 14:39:25.206210 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fe6ba1-92c8-48e9-b072-4c5427b357a1-utilities\") pod \"redhat-marketplace-7g8v5\" (UID: \"88fe6ba1-92c8-48e9-b072-4c5427b357a1\") " pod="openshift-marketplace/redhat-marketplace-7g8v5" Jan 06 14:39:25 crc kubenswrapper[4869]: I0106 14:39:25.206843 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7q7s\" (UniqueName: \"kubernetes.io/projected/88fe6ba1-92c8-48e9-b072-4c5427b357a1-kube-api-access-x7q7s\") pod \"redhat-marketplace-7g8v5\" (UID: \"88fe6ba1-92c8-48e9-b072-4c5427b357a1\") " pod="openshift-marketplace/redhat-marketplace-7g8v5" Jan 06 14:39:25 crc kubenswrapper[4869]: I0106 14:39:25.206928 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fe6ba1-92c8-48e9-b072-4c5427b357a1-utilities\") pod \"redhat-marketplace-7g8v5\" (UID: \"88fe6ba1-92c8-48e9-b072-4c5427b357a1\") " pod="openshift-marketplace/redhat-marketplace-7g8v5" Jan 06 14:39:25 crc kubenswrapper[4869]: I0106 14:39:25.206965 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fe6ba1-92c8-48e9-b072-4c5427b357a1-catalog-content\") pod \"redhat-marketplace-7g8v5\" (UID: \"88fe6ba1-92c8-48e9-b072-4c5427b357a1\") " pod="openshift-marketplace/redhat-marketplace-7g8v5" Jan 06 14:39:25 crc kubenswrapper[4869]: I0106 14:39:25.207448 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fe6ba1-92c8-48e9-b072-4c5427b357a1-catalog-content\") pod \"redhat-marketplace-7g8v5\" (UID: \"88fe6ba1-92c8-48e9-b072-4c5427b357a1\") " pod="openshift-marketplace/redhat-marketplace-7g8v5" Jan 06 14:39:25 crc kubenswrapper[4869]: I0106 14:39:25.233765 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-x7q7s\" (UniqueName: \"kubernetes.io/projected/88fe6ba1-92c8-48e9-b072-4c5427b357a1-kube-api-access-x7q7s\") pod \"redhat-marketplace-7g8v5\" (UID: \"88fe6ba1-92c8-48e9-b072-4c5427b357a1\") " pod="openshift-marketplace/redhat-marketplace-7g8v5" Jan 06 14:39:25 crc kubenswrapper[4869]: I0106 14:39:25.351001 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7g8v5" Jan 06 14:39:25 crc kubenswrapper[4869]: I0106 14:39:25.840500 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7g8v5"] Jan 06 14:39:26 crc kubenswrapper[4869]: I0106 14:39:26.718108 4869 generic.go:334] "Generic (PLEG): container finished" podID="88fe6ba1-92c8-48e9-b072-4c5427b357a1" containerID="2da6c76c878d9f97cb032fbd368d037c817b1bc45e961b429374eeb83c68b51f" exitCode=0 Jan 06 14:39:26 crc kubenswrapper[4869]: I0106 14:39:26.718192 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7g8v5" event={"ID":"88fe6ba1-92c8-48e9-b072-4c5427b357a1","Type":"ContainerDied","Data":"2da6c76c878d9f97cb032fbd368d037c817b1bc45e961b429374eeb83c68b51f"} Jan 06 14:39:26 crc kubenswrapper[4869]: I0106 14:39:26.718752 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7g8v5" event={"ID":"88fe6ba1-92c8-48e9-b072-4c5427b357a1","Type":"ContainerStarted","Data":"7c6172035306ef2f85c3a4a3c6583a594fa6d0e6709d34f47437d86a58f7beab"} Jan 06 14:39:28 crc kubenswrapper[4869]: I0106 14:39:28.739946 4869 generic.go:334] "Generic (PLEG): container finished" podID="88fe6ba1-92c8-48e9-b072-4c5427b357a1" containerID="181d60ec9d07ce82624298eb38f449e3a1acd066762d6d57e5491f363af948ab" exitCode=0 Jan 06 14:39:28 crc kubenswrapper[4869]: I0106 14:39:28.740269 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7g8v5" event={"ID":"88fe6ba1-92c8-48e9-b072-4c5427b357a1","Type":"ContainerDied","Data":"181d60ec9d07ce82624298eb38f449e3a1acd066762d6d57e5491f363af948ab"} Jan 06 14:39:29 crc kubenswrapper[4869]: I0106 14:39:29.749055 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7g8v5" event={"ID":"88fe6ba1-92c8-48e9-b072-4c5427b357a1","Type":"ContainerStarted","Data":"af0852115a4b3b650e511cd7048edc2a13eaf4b71895cc9cd9cfc56298ea6453"} Jan 06 14:39:29 crc kubenswrapper[4869]: I0106 14:39:29.781786 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7g8v5" podStartSLOduration=3.26164846 podStartE2EDuration="5.781758822s" podCreationTimestamp="2026-01-06 14:39:24 +0000 UTC" firstStartedPulling="2026-01-06 14:39:26.721210554 +0000 UTC m=+2385.260898248" lastFinishedPulling="2026-01-06 14:39:29.241320936 +0000 UTC m=+2387.781008610" observedRunningTime="2026-01-06 14:39:29.7729228 +0000 UTC m=+2388.312610474" watchObservedRunningTime="2026-01-06 14:39:29.781758822 +0000 UTC m=+2388.321446496" Jan 06 14:39:35 crc kubenswrapper[4869]: I0106 14:39:35.351625 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7g8v5" Jan 06 14:39:35 crc kubenswrapper[4869]: I0106 14:39:35.352532 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7g8v5" Jan 06 14:39:35 crc kubenswrapper[4869]: I0106 14:39:35.440342 4869 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7g8v5" Jan 06 14:39:35 crc kubenswrapper[4869]: I0106 14:39:35.859115 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7g8v5" Jan 06 14:39:35 crc kubenswrapper[4869]: I0106 14:39:35.909036 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7g8v5"] Jan 06 14:39:37 crc kubenswrapper[4869]: I0106 14:39:37.817228 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7g8v5" podUID="88fe6ba1-92c8-48e9-b072-4c5427b357a1" containerName="registry-server" containerID="cri-o://af0852115a4b3b650e511cd7048edc2a13eaf4b71895cc9cd9cfc56298ea6453" gracePeriod=2 Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.253980 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7g8v5" Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.373533 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7q7s\" (UniqueName: \"kubernetes.io/projected/88fe6ba1-92c8-48e9-b072-4c5427b357a1-kube-api-access-x7q7s\") pod \"88fe6ba1-92c8-48e9-b072-4c5427b357a1\" (UID: \"88fe6ba1-92c8-48e9-b072-4c5427b357a1\") " Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.373649 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fe6ba1-92c8-48e9-b072-4c5427b357a1-catalog-content\") pod \"88fe6ba1-92c8-48e9-b072-4c5427b357a1\" (UID: \"88fe6ba1-92c8-48e9-b072-4c5427b357a1\") " Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.373694 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fe6ba1-92c8-48e9-b072-4c5427b357a1-utilities\") pod \"88fe6ba1-92c8-48e9-b072-4c5427b357a1\" (UID: \"88fe6ba1-92c8-48e9-b072-4c5427b357a1\") " Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.374765 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88fe6ba1-92c8-48e9-b072-4c5427b357a1-utilities" (OuterVolumeSpecName: "utilities") pod "88fe6ba1-92c8-48e9-b072-4c5427b357a1" (UID: "88fe6ba1-92c8-48e9-b072-4c5427b357a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.381026 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88fe6ba1-92c8-48e9-b072-4c5427b357a1-kube-api-access-x7q7s" (OuterVolumeSpecName: "kube-api-access-x7q7s") pod "88fe6ba1-92c8-48e9-b072-4c5427b357a1" (UID: "88fe6ba1-92c8-48e9-b072-4c5427b357a1"). InnerVolumeSpecName "kube-api-access-x7q7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.400459 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88fe6ba1-92c8-48e9-b072-4c5427b357a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88fe6ba1-92c8-48e9-b072-4c5427b357a1" (UID: "88fe6ba1-92c8-48e9-b072-4c5427b357a1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.475897 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88fe6ba1-92c8-48e9-b072-4c5427b357a1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.475951 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88fe6ba1-92c8-48e9-b072-4c5427b357a1-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.475961 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7q7s\" (UniqueName: \"kubernetes.io/projected/88fe6ba1-92c8-48e9-b072-4c5427b357a1-kube-api-access-x7q7s\") on node \"crc\" DevicePath \"\"" Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.705387 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:39:38 crc kubenswrapper[4869]: E0106 14:39:38.705936 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.828908 4869 generic.go:334] "Generic (PLEG): container finished" podID="88fe6ba1-92c8-48e9-b072-4c5427b357a1" containerID="af0852115a4b3b650e511cd7048edc2a13eaf4b71895cc9cd9cfc56298ea6453" exitCode=0 Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.828976 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7g8v5" event={"ID":"88fe6ba1-92c8-48e9-b072-4c5427b357a1","Type":"ContainerDied","Data":"af0852115a4b3b650e511cd7048edc2a13eaf4b71895cc9cd9cfc56298ea6453"} Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.829017 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7g8v5" event={"ID":"88fe6ba1-92c8-48e9-b072-4c5427b357a1","Type":"ContainerDied","Data":"7c6172035306ef2f85c3a4a3c6583a594fa6d0e6709d34f47437d86a58f7beab"} Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.829041 4869 scope.go:117] "RemoveContainer" containerID="af0852115a4b3b650e511cd7048edc2a13eaf4b71895cc9cd9cfc56298ea6453" Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.829197 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7g8v5" Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.854090 4869 scope.go:117] "RemoveContainer" containerID="181d60ec9d07ce82624298eb38f449e3a1acd066762d6d57e5491f363af948ab" Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.877519 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7g8v5"] Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.884741 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7g8v5"] Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.895239 4869 scope.go:117] "RemoveContainer" containerID="2da6c76c878d9f97cb032fbd368d037c817b1bc45e961b429374eeb83c68b51f" Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.921396 4869 scope.go:117] "RemoveContainer" containerID="af0852115a4b3b650e511cd7048edc2a13eaf4b71895cc9cd9cfc56298ea6453" Jan 06 14:39:38 crc kubenswrapper[4869]: E0106 14:39:38.921862 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af0852115a4b3b650e511cd7048edc2a13eaf4b71895cc9cd9cfc56298ea6453\": container with ID starting with af0852115a4b3b650e511cd7048edc2a13eaf4b71895cc9cd9cfc56298ea6453 not found: ID does not exist" containerID="af0852115a4b3b650e511cd7048edc2a13eaf4b71895cc9cd9cfc56298ea6453" Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.921892 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af0852115a4b3b650e511cd7048edc2a13eaf4b71895cc9cd9cfc56298ea6453"} err="failed to get container status \"af0852115a4b3b650e511cd7048edc2a13eaf4b71895cc9cd9cfc56298ea6453\": rpc error: code = NotFound desc = could not find container \"af0852115a4b3b650e511cd7048edc2a13eaf4b71895cc9cd9cfc56298ea6453\": container with ID starting with af0852115a4b3b650e511cd7048edc2a13eaf4b71895cc9cd9cfc56298ea6453 not found: ID does not exist" Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.921913 4869 scope.go:117] "RemoveContainer" containerID="181d60ec9d07ce82624298eb38f449e3a1acd066762d6d57e5491f363af948ab" Jan 06 14:39:38 crc kubenswrapper[4869]: E0106 14:39:38.922243 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"181d60ec9d07ce82624298eb38f449e3a1acd066762d6d57e5491f363af948ab\": container with ID starting with 181d60ec9d07ce82624298eb38f449e3a1acd066762d6d57e5491f363af948ab not found: ID does not exist" containerID="181d60ec9d07ce82624298eb38f449e3a1acd066762d6d57e5491f363af948ab" Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.922274 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"181d60ec9d07ce82624298eb38f449e3a1acd066762d6d57e5491f363af948ab"} err="failed to get container status \"181d60ec9d07ce82624298eb38f449e3a1acd066762d6d57e5491f363af948ab\": rpc error: code = NotFound desc = could not find container \"181d60ec9d07ce82624298eb38f449e3a1acd066762d6d57e5491f363af948ab\": container with ID starting with 181d60ec9d07ce82624298eb38f449e3a1acd066762d6d57e5491f363af948ab not found: ID does not exist" Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.922291 4869 scope.go:117] "RemoveContainer" containerID="2da6c76c878d9f97cb032fbd368d037c817b1bc45e961b429374eeb83c68b51f" Jan 06 14:39:38 crc kubenswrapper[4869]: E0106 14:39:38.922569 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2da6c76c878d9f97cb032fbd368d037c817b1bc45e961b429374eeb83c68b51f\": container with ID starting with 2da6c76c878d9f97cb032fbd368d037c817b1bc45e961b429374eeb83c68b51f not found: ID does not exist" containerID="2da6c76c878d9f97cb032fbd368d037c817b1bc45e961b429374eeb83c68b51f" Jan 06 14:39:38 crc kubenswrapper[4869]: I0106 14:39:38.922614 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2da6c76c878d9f97cb032fbd368d037c817b1bc45e961b429374eeb83c68b51f"} err="failed to get container status \"2da6c76c878d9f97cb032fbd368d037c817b1bc45e961b429374eeb83c68b51f\": rpc error: code = NotFound desc = could not find container \"2da6c76c878d9f97cb032fbd368d037c817b1bc45e961b429374eeb83c68b51f\": container with ID starting with 2da6c76c878d9f97cb032fbd368d037c817b1bc45e961b429374eeb83c68b51f not found: ID does not exist" Jan 06 14:39:39 crc kubenswrapper[4869]: I0106 14:39:39.714469 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88fe6ba1-92c8-48e9-b072-4c5427b357a1" path="/var/lib/kubelet/pods/88fe6ba1-92c8-48e9-b072-4c5427b357a1/volumes" Jan 06 14:39:39 crc kubenswrapper[4869]: I0106 14:39:39.837746 4869 generic.go:334] "Generic (PLEG): container finished" podID="319f344b-5374-42d9-bfea-f25f3717ccf9" containerID="86fea0d7c419a8352f12855961b2f5f2900358fdd703faca6b631c2e9986ac99" exitCode=0 Jan 06 14:39:39 crc kubenswrapper[4869]: I0106 14:39:39.837867 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt" event={"ID":"319f344b-5374-42d9-bfea-f25f3717ccf9","Type":"ContainerDied","Data":"86fea0d7c419a8352f12855961b2f5f2900358fdd703faca6b631c2e9986ac99"} Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.243255 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.333248 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-inventory\") pod \"319f344b-5374-42d9-bfea-f25f3717ccf9\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.333307 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-bootstrap-combined-ca-bundle\") pod \"319f344b-5374-42d9-bfea-f25f3717ccf9\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.333368 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"319f344b-5374-42d9-bfea-f25f3717ccf9\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.333395 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-repo-setup-combined-ca-bundle\") pod \"319f344b-5374-42d9-bfea-f25f3717ccf9\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.333422 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-libvirt-combined-ca-bundle\") pod \"319f344b-5374-42d9-bfea-f25f3717ccf9\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.333446 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-openstack-edpm-ipam-ovn-default-certs-0\") pod \"319f344b-5374-42d9-bfea-f25f3717ccf9\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.333498 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-neutron-metadata-combined-ca-bundle\") pod \"319f344b-5374-42d9-bfea-f25f3717ccf9\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.333534 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-ssh-key-openstack-edpm-ipam\") pod \"319f344b-5374-42d9-bfea-f25f3717ccf9\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.333557 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"319f344b-5374-42d9-bfea-f25f3717ccf9\" (UID: 
\"319f344b-5374-42d9-bfea-f25f3717ccf9\") " Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.333625 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-ovn-combined-ca-bundle\") pod \"319f344b-5374-42d9-bfea-f25f3717ccf9\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.333692 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnjfm\" (UniqueName: \"kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-kube-api-access-cnjfm\") pod \"319f344b-5374-42d9-bfea-f25f3717ccf9\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.333710 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-nova-combined-ca-bundle\") pod \"319f344b-5374-42d9-bfea-f25f3717ccf9\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.333739 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-ceph\") pod \"319f344b-5374-42d9-bfea-f25f3717ccf9\" (UID: \"319f344b-5374-42d9-bfea-f25f3717ccf9\") " Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.344118 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "319f344b-5374-42d9-bfea-f25f3717ccf9" (UID: "319f344b-5374-42d9-bfea-f25f3717ccf9"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.344209 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "319f344b-5374-42d9-bfea-f25f3717ccf9" (UID: "319f344b-5374-42d9-bfea-f25f3717ccf9"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.351580 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "319f344b-5374-42d9-bfea-f25f3717ccf9" (UID: "319f344b-5374-42d9-bfea-f25f3717ccf9"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.356728 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "319f344b-5374-42d9-bfea-f25f3717ccf9" (UID: "319f344b-5374-42d9-bfea-f25f3717ccf9"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.357359 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-ceph" (OuterVolumeSpecName: "ceph") pod "319f344b-5374-42d9-bfea-f25f3717ccf9" (UID: "319f344b-5374-42d9-bfea-f25f3717ccf9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.357550 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "319f344b-5374-42d9-bfea-f25f3717ccf9" (UID: "319f344b-5374-42d9-bfea-f25f3717ccf9"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.357553 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "319f344b-5374-42d9-bfea-f25f3717ccf9" (UID: "319f344b-5374-42d9-bfea-f25f3717ccf9"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.363517 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "319f344b-5374-42d9-bfea-f25f3717ccf9" (UID: "319f344b-5374-42d9-bfea-f25f3717ccf9"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.373216 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "319f344b-5374-42d9-bfea-f25f3717ccf9" (UID: "319f344b-5374-42d9-bfea-f25f3717ccf9"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.373573 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-kube-api-access-cnjfm" (OuterVolumeSpecName: "kube-api-access-cnjfm") pod "319f344b-5374-42d9-bfea-f25f3717ccf9" (UID: "319f344b-5374-42d9-bfea-f25f3717ccf9"). InnerVolumeSpecName "kube-api-access-cnjfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.373566 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "319f344b-5374-42d9-bfea-f25f3717ccf9" (UID: "319f344b-5374-42d9-bfea-f25f3717ccf9"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.389767 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-inventory" (OuterVolumeSpecName: "inventory") pod "319f344b-5374-42d9-bfea-f25f3717ccf9" (UID: "319f344b-5374-42d9-bfea-f25f3717ccf9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.408816 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "319f344b-5374-42d9-bfea-f25f3717ccf9" (UID: "319f344b-5374-42d9-bfea-f25f3717ccf9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.437404 4869 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.437444 4869 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.437461 4869 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.437477 4869 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.437494 4869 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.437507 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.437550 4869 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.437566 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.437582 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnjfm\" 
(UniqueName: \"kubernetes.io/projected/319f344b-5374-42d9-bfea-f25f3717ccf9-kube-api-access-cnjfm\") on node \"crc\" DevicePath \"\"" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.437594 4869 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.437630 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-ceph\") on node \"crc\" DevicePath \"\"" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.437642 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-inventory\") on node \"crc\" DevicePath \"\"" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.437655 4869 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319f344b-5374-42d9-bfea-f25f3717ccf9-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.859009 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt" event={"ID":"319f344b-5374-42d9-bfea-f25f3717ccf9","Type":"ContainerDied","Data":"8c221fc0e8afa1d87c74c69aeac099f182adceffcd8537d4908f1190e699d122"} Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.859400 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c221fc0e8afa1d87c74c69aeac099f182adceffcd8537d4908f1190e699d122" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.859112 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.964530 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh"] Jan 06 14:39:41 crc kubenswrapper[4869]: E0106 14:39:41.964976 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="319f344b-5374-42d9-bfea-f25f3717ccf9" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.964994 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="319f344b-5374-42d9-bfea-f25f3717ccf9" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 06 14:39:41 crc kubenswrapper[4869]: E0106 14:39:41.965002 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88fe6ba1-92c8-48e9-b072-4c5427b357a1" containerName="registry-server" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.965008 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="88fe6ba1-92c8-48e9-b072-4c5427b357a1" containerName="registry-server" Jan 06 14:39:41 crc kubenswrapper[4869]: E0106 14:39:41.965022 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88fe6ba1-92c8-48e9-b072-4c5427b357a1" containerName="extract-content" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.965028 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="88fe6ba1-92c8-48e9-b072-4c5427b357a1" containerName="extract-content" Jan 06 14:39:41 crc kubenswrapper[4869]: E0106 14:39:41.965042 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88fe6ba1-92c8-48e9-b072-4c5427b357a1" containerName="extract-utilities" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.965048 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="88fe6ba1-92c8-48e9-b072-4c5427b357a1" containerName="extract-utilities" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.966602 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="319f344b-5374-42d9-bfea-f25f3717ccf9" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.966636 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="88fe6ba1-92c8-48e9-b072-4c5427b357a1" containerName="registry-server" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.967248 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.969714 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.969962 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.970151 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.970206 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.970304 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:39:41 crc kubenswrapper[4869]: I0106 14:39:41.991251 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh"] Jan 06 14:39:42 crc kubenswrapper[4869]: I0106 14:39:42.054457 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9894b84-99cf-4a02-8d21-3795a64be01a-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh\" (UID: \"a9894b84-99cf-4a02-8d21-3795a64be01a\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh" Jan 06 14:39:42 crc kubenswrapper[4869]: I0106 14:39:42.054527 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9894b84-99cf-4a02-8d21-3795a64be01a-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh\" (UID: \"a9894b84-99cf-4a02-8d21-3795a64be01a\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh" Jan 06 14:39:42 crc kubenswrapper[4869]: I0106 14:39:42.054551 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc42j\" (UniqueName: \"kubernetes.io/projected/a9894b84-99cf-4a02-8d21-3795a64be01a-kube-api-access-pc42j\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh\" (UID: \"a9894b84-99cf-4a02-8d21-3795a64be01a\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh" Jan 06 14:39:42 crc kubenswrapper[4869]: I0106 14:39:42.054929 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9894b84-99cf-4a02-8d21-3795a64be01a-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh\" (UID: \"a9894b84-99cf-4a02-8d21-3795a64be01a\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh" Jan 06 14:39:42 crc kubenswrapper[4869]: I0106 14:39:42.157193 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9894b84-99cf-4a02-8d21-3795a64be01a-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh\" (UID: \"a9894b84-99cf-4a02-8d21-3795a64be01a\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh" Jan 06 14:39:42 crc kubenswrapper[4869]: I0106 14:39:42.157318 4869 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9894b84-99cf-4a02-8d21-3795a64be01a-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh\" (UID: \"a9894b84-99cf-4a02-8d21-3795a64be01a\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh" Jan 06 14:39:42 crc kubenswrapper[4869]: I0106 14:39:42.157378 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9894b84-99cf-4a02-8d21-3795a64be01a-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh\" (UID: \"a9894b84-99cf-4a02-8d21-3795a64be01a\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh" Jan 06 14:39:42 crc kubenswrapper[4869]: I0106 14:39:42.157403 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc42j\" (UniqueName: \"kubernetes.io/projected/a9894b84-99cf-4a02-8d21-3795a64be01a-kube-api-access-pc42j\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh\" (UID: \"a9894b84-99cf-4a02-8d21-3795a64be01a\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh" Jan 06 14:39:42 crc kubenswrapper[4869]: I0106 14:39:42.160829 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9894b84-99cf-4a02-8d21-3795a64be01a-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh\" (UID: \"a9894b84-99cf-4a02-8d21-3795a64be01a\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh" Jan 06 14:39:42 crc kubenswrapper[4869]: I0106 14:39:42.161183 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9894b84-99cf-4a02-8d21-3795a64be01a-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh\" (UID: \"a9894b84-99cf-4a02-8d21-3795a64be01a\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh" Jan 06 14:39:42 crc kubenswrapper[4869]: I0106 14:39:42.174259 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9894b84-99cf-4a02-8d21-3795a64be01a-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh\" (UID: \"a9894b84-99cf-4a02-8d21-3795a64be01a\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh" Jan 06 14:39:42 crc kubenswrapper[4869]: I0106 14:39:42.174962 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc42j\" (UniqueName: \"kubernetes.io/projected/a9894b84-99cf-4a02-8d21-3795a64be01a-kube-api-access-pc42j\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh\" (UID: \"a9894b84-99cf-4a02-8d21-3795a64be01a\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh" Jan 06 14:39:42 crc kubenswrapper[4869]: I0106 14:39:42.286485 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh" Jan 06 14:39:42 crc kubenswrapper[4869]: I0106 14:39:42.629003 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh"] Jan 06 14:39:42 crc kubenswrapper[4869]: I0106 14:39:42.867900 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh" event={"ID":"a9894b84-99cf-4a02-8d21-3795a64be01a","Type":"ContainerStarted","Data":"afe10bfdac225dccb7c6bbf3c1259efb19bb234c7d64b41ad8d40a72a4694a61"} Jan 06 14:39:43 crc kubenswrapper[4869]: I0106 14:39:43.068102 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:39:43 crc kubenswrapper[4869]: I0106 14:39:43.888114 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh" event={"ID":"a9894b84-99cf-4a02-8d21-3795a64be01a","Type":"ContainerStarted","Data":"002c716f6da6c93a5c88fe690581ca547bb28d8fe454550742f180c9c4913db2"} Jan 06 14:39:43 crc kubenswrapper[4869]: I0106 14:39:43.911502 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh" podStartSLOduration=2.48392198 podStartE2EDuration="2.911478932s" podCreationTimestamp="2026-01-06 14:39:41 +0000 UTC" firstStartedPulling="2026-01-06 14:39:42.637907591 +0000 UTC m=+2401.177595255" lastFinishedPulling="2026-01-06 14:39:43.065464533 +0000 UTC m=+2401.605152207" observedRunningTime="2026-01-06 14:39:43.910488789 +0000 UTC m=+2402.450176483" watchObservedRunningTime="2026-01-06 14:39:43.911478932 +0000 UTC m=+2402.451166636" Jan 06 14:39:49 crc kubenswrapper[4869]: I0106 14:39:49.705003 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:39:49 crc kubenswrapper[4869]: E0106 14:39:49.706195 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:39:49 crc kubenswrapper[4869]: I0106 14:39:49.966753 4869 generic.go:334] "Generic (PLEG): container finished" podID="a9894b84-99cf-4a02-8d21-3795a64be01a" containerID="002c716f6da6c93a5c88fe690581ca547bb28d8fe454550742f180c9c4913db2" exitCode=0 Jan 06 14:39:49 crc kubenswrapper[4869]: I0106 14:39:49.966828 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh" event={"ID":"a9894b84-99cf-4a02-8d21-3795a64be01a","Type":"ContainerDied","Data":"002c716f6da6c93a5c88fe690581ca547bb28d8fe454550742f180c9c4913db2"} Jan 06 14:39:51 crc kubenswrapper[4869]: I0106 14:39:51.389105 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh" Jan 06 14:39:51 crc kubenswrapper[4869]: I0106 14:39:51.438546 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9894b84-99cf-4a02-8d21-3795a64be01a-ceph\") pod \"a9894b84-99cf-4a02-8d21-3795a64be01a\" (UID: \"a9894b84-99cf-4a02-8d21-3795a64be01a\") " Jan 06 14:39:51 crc kubenswrapper[4869]: I0106 14:39:51.445957 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9894b84-99cf-4a02-8d21-3795a64be01a-ceph" (OuterVolumeSpecName: "ceph") pod "a9894b84-99cf-4a02-8d21-3795a64be01a" (UID: "a9894b84-99cf-4a02-8d21-3795a64be01a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:39:51 crc kubenswrapper[4869]: I0106 14:39:51.540739 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9894b84-99cf-4a02-8d21-3795a64be01a-inventory\") pod \"a9894b84-99cf-4a02-8d21-3795a64be01a\" (UID: \"a9894b84-99cf-4a02-8d21-3795a64be01a\") " Jan 06 14:39:51 crc kubenswrapper[4869]: I0106 14:39:51.540862 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9894b84-99cf-4a02-8d21-3795a64be01a-ssh-key-openstack-edpm-ipam\") pod \"a9894b84-99cf-4a02-8d21-3795a64be01a\" (UID: \"a9894b84-99cf-4a02-8d21-3795a64be01a\") " Jan 06 14:39:51 crc kubenswrapper[4869]: I0106 14:39:51.541107 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc42j\" (UniqueName: \"kubernetes.io/projected/a9894b84-99cf-4a02-8d21-3795a64be01a-kube-api-access-pc42j\") pod \"a9894b84-99cf-4a02-8d21-3795a64be01a\" (UID: \"a9894b84-99cf-4a02-8d21-3795a64be01a\") " Jan 06 14:39:51 crc kubenswrapper[4869]: I0106 14:39:51.541629 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9894b84-99cf-4a02-8d21-3795a64be01a-ceph\") on node \"crc\" DevicePath \"\"" Jan 06 14:39:51 crc kubenswrapper[4869]: I0106 14:39:51.544457 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9894b84-99cf-4a02-8d21-3795a64be01a-kube-api-access-pc42j" (OuterVolumeSpecName: "kube-api-access-pc42j") pod "a9894b84-99cf-4a02-8d21-3795a64be01a" (UID: "a9894b84-99cf-4a02-8d21-3795a64be01a"). InnerVolumeSpecName "kube-api-access-pc42j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:39:51 crc kubenswrapper[4869]: I0106 14:39:51.565497 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9894b84-99cf-4a02-8d21-3795a64be01a-inventory" (OuterVolumeSpecName: "inventory") pod "a9894b84-99cf-4a02-8d21-3795a64be01a" (UID: "a9894b84-99cf-4a02-8d21-3795a64be01a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:39:51 crc kubenswrapper[4869]: I0106 14:39:51.567846 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9894b84-99cf-4a02-8d21-3795a64be01a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a9894b84-99cf-4a02-8d21-3795a64be01a" (UID: "a9894b84-99cf-4a02-8d21-3795a64be01a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:39:51 crc kubenswrapper[4869]: I0106 14:39:51.643990 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a9894b84-99cf-4a02-8d21-3795a64be01a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:39:51 crc kubenswrapper[4869]: I0106 14:39:51.644028 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc42j\" (UniqueName: \"kubernetes.io/projected/a9894b84-99cf-4a02-8d21-3795a64be01a-kube-api-access-pc42j\") on node \"crc\" DevicePath \"\"" Jan 06 14:39:51 crc kubenswrapper[4869]: I0106 14:39:51.644040 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9894b84-99cf-4a02-8d21-3795a64be01a-inventory\") on node \"crc\" DevicePath \"\"" Jan 06 14:39:51 crc kubenswrapper[4869]: I0106 14:39:51.985102 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh" event={"ID":"a9894b84-99cf-4a02-8d21-3795a64be01a","Type":"ContainerDied","Data":"afe10bfdac225dccb7c6bbf3c1259efb19bb234c7d64b41ad8d40a72a4694a61"} Jan 06 14:39:51 crc kubenswrapper[4869]: I0106 14:39:51.985188 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afe10bfdac225dccb7c6bbf3c1259efb19bb234c7d64b41ad8d40a72a4694a61" Jan 06 14:39:51 crc kubenswrapper[4869]: I0106 14:39:51.985133 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.088954 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5"] Jan 06 14:39:52 crc kubenswrapper[4869]: E0106 14:39:52.089387 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9894b84-99cf-4a02-8d21-3795a64be01a" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.089411 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9894b84-99cf-4a02-8d21-3795a64be01a" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.089626 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9894b84-99cf-4a02-8d21-3795a64be01a" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.090396 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.092872 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.093533 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.093754 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.094941 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.094941 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.095698 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.148349 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5"] Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.152391 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxjxm\" (UniqueName: \"kubernetes.io/projected/b5f1c551-161d-40cc-a7bb-475eab4b0f98-kube-api-access-lxjxm\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d7lg5\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.152483 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d7lg5\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.152531 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d7lg5\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.152559 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d7lg5\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.152622 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d7lg5\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:39:52 
crc kubenswrapper[4869]: I0106 14:39:52.152650 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d7lg5\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.255918 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxjxm\" (UniqueName: \"kubernetes.io/projected/b5f1c551-161d-40cc-a7bb-475eab4b0f98-kube-api-access-lxjxm\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d7lg5\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.256081 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d7lg5\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.256152 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d7lg5\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.256195 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d7lg5\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.256296 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d7lg5\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.256337 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d7lg5\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.258359 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d7lg5\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.261922 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d7lg5\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.262758 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d7lg5\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.265431 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d7lg5\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.265516 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d7lg5\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.282609 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxjxm\" (UniqueName: \"kubernetes.io/projected/b5f1c551-161d-40cc-a7bb-475eab4b0f98-kube-api-access-lxjxm\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d7lg5\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.449477 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:39:52 crc kubenswrapper[4869]: I0106 14:39:52.978951 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5"] Jan 06 14:39:53 crc kubenswrapper[4869]: I0106 14:39:53.007503 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" event={"ID":"b5f1c551-161d-40cc-a7bb-475eab4b0f98","Type":"ContainerStarted","Data":"5043ffa483dc5c8fc22a9cd7dea9b95b9eff45e1a6600e9ac44b46527517f1c0"} Jan 06 14:39:54 crc kubenswrapper[4869]: I0106 14:39:54.021461 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" event={"ID":"b5f1c551-161d-40cc-a7bb-475eab4b0f98","Type":"ContainerStarted","Data":"0a0c008d6b4acf10f1c308513344bc8eaa815fdce756ac82a1af08f1203fd406"} Jan 06 14:39:54 crc kubenswrapper[4869]: I0106 14:39:54.047999 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" podStartSLOduration=1.611343186 podStartE2EDuration="2.047973676s" podCreationTimestamp="2026-01-06 14:39:52 +0000 UTC" firstStartedPulling="2026-01-06 14:39:52.982087705 +0000 UTC m=+2411.521775369" lastFinishedPulling="2026-01-06 14:39:53.418718175 +0000 UTC m=+2411.958405859" observedRunningTime="2026-01-06 14:39:54.044302538 +0000 UTC m=+2412.583990212" watchObservedRunningTime="2026-01-06 14:39:54.047973676 +0000 UTC m=+2412.587661360" Jan 06 14:40:00 crc kubenswrapper[4869]: I0106 14:40:00.704348 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:40:00 crc kubenswrapper[4869]: E0106 14:40:00.705410 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:40:11 crc kubenswrapper[4869]: I0106 14:40:11.711794 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:40:11 crc kubenswrapper[4869]: E0106 14:40:11.712951 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:40:23 crc kubenswrapper[4869]: I0106 14:40:23.704406 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:40:23 crc kubenswrapper[4869]: E0106 14:40:23.706468 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" 
podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:40:36 crc kubenswrapper[4869]: I0106 14:40:36.705252 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:40:36 crc kubenswrapper[4869]: E0106 14:40:36.706114 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:40:49 crc kubenswrapper[4869]: I0106 14:40:49.704444 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:40:49 crc kubenswrapper[4869]: E0106 14:40:49.705321 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:41:03 crc kubenswrapper[4869]: I0106 14:41:03.704694 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:41:03 crc kubenswrapper[4869]: E0106 14:41:03.705739 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:41:16 crc kubenswrapper[4869]: I0106 14:41:16.706458 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:41:16 crc kubenswrapper[4869]: E0106 14:41:16.707543 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:41:19 crc kubenswrapper[4869]: I0106 14:41:19.969061 4869 generic.go:334] "Generic (PLEG): container finished" podID="b5f1c551-161d-40cc-a7bb-475eab4b0f98" containerID="0a0c008d6b4acf10f1c308513344bc8eaa815fdce756ac82a1af08f1203fd406" exitCode=0 Jan 06 14:41:19 crc kubenswrapper[4869]: I0106 14:41:19.969189 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" event={"ID":"b5f1c551-161d-40cc-a7bb-475eab4b0f98","Type":"ContainerDied","Data":"0a0c008d6b4acf10f1c308513344bc8eaa815fdce756ac82a1af08f1203fd406"} Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.489735 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.585167 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ceph\") pod \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.585334 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ssh-key-openstack-edpm-ipam\") pod \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.585392 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ovn-combined-ca-bundle\") pod \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.585590 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxjxm\" (UniqueName: \"kubernetes.io/projected/b5f1c551-161d-40cc-a7bb-475eab4b0f98-kube-api-access-lxjxm\") pod \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.585730 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-inventory\") pod \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.585909 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ovncontroller-config-0\") pod \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\" (UID: \"b5f1c551-161d-40cc-a7bb-475eab4b0f98\") " Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.621438 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5f1c551-161d-40cc-a7bb-475eab4b0f98-kube-api-access-lxjxm" (OuterVolumeSpecName: "kube-api-access-lxjxm") pod "b5f1c551-161d-40cc-a7bb-475eab4b0f98" (UID: "b5f1c551-161d-40cc-a7bb-475eab4b0f98"). InnerVolumeSpecName "kube-api-access-lxjxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.635826 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "b5f1c551-161d-40cc-a7bb-475eab4b0f98" (UID: "b5f1c551-161d-40cc-a7bb-475eab4b0f98"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.635876 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b5f1c551-161d-40cc-a7bb-475eab4b0f98" (UID: "b5f1c551-161d-40cc-a7bb-475eab4b0f98"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.638115 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "b5f1c551-161d-40cc-a7bb-475eab4b0f98" (UID: "b5f1c551-161d-40cc-a7bb-475eab4b0f98"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.641806 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ceph" (OuterVolumeSpecName: "ceph") pod "b5f1c551-161d-40cc-a7bb-475eab4b0f98" (UID: "b5f1c551-161d-40cc-a7bb-475eab4b0f98"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.659955 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-inventory" (OuterVolumeSpecName: "inventory") pod "b5f1c551-161d-40cc-a7bb-475eab4b0f98" (UID: "b5f1c551-161d-40cc-a7bb-475eab4b0f98"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.691119 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ceph\") on node \"crc\" DevicePath \"\"" Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.691178 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.691195 4869 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.691206 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxjxm\" (UniqueName: \"kubernetes.io/projected/b5f1c551-161d-40cc-a7bb-475eab4b0f98-kube-api-access-lxjxm\") on node \"crc\" DevicePath \"\"" Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.691230 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b5f1c551-161d-40cc-a7bb-475eab4b0f98-inventory\") on node \"crc\" DevicePath \"\"" Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.691242 4869 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b5f1c551-161d-40cc-a7bb-475eab4b0f98-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.989967 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" event={"ID":"b5f1c551-161d-40cc-a7bb-475eab4b0f98","Type":"ContainerDied","Data":"5043ffa483dc5c8fc22a9cd7dea9b95b9eff45e1a6600e9ac44b46527517f1c0"} Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.990023 4869 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="5043ffa483dc5c8fc22a9cd7dea9b95b9eff45e1a6600e9ac44b46527517f1c0" Jan 06 14:41:21 crc kubenswrapper[4869]: I0106 14:41:21.990032 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d7lg5" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.240766 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz"] Jan 06 14:41:22 crc kubenswrapper[4869]: E0106 14:41:22.241817 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f1c551-161d-40cc-a7bb-475eab4b0f98" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.241851 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f1c551-161d-40cc-a7bb-475eab4b0f98" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.242280 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f1c551-161d-40cc-a7bb-475eab4b0f98" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.243542 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.250175 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.250175 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.250597 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.250628 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.251555 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.251741 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.252016 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.255077 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz"] Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.405329 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zgkn\" (UniqueName: \"kubernetes.io/projected/37c9408c-3a8b-4246-a7ec-d2e99d49790f-kube-api-access-6zgkn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.405878 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.405921 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.406012 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.406041 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.406068 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.406102 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.510654 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.510775 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.510849 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.510918 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.511040 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zgkn\" (UniqueName: \"kubernetes.io/projected/37c9408c-3a8b-4246-a7ec-d2e99d49790f-kube-api-access-6zgkn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.511097 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.511157 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.521884 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.522855 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.524421 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.531141 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.532648 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.533815 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.546222 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zgkn\" (UniqueName: \"kubernetes.io/projected/37c9408c-3a8b-4246-a7ec-d2e99d49790f-kube-api-access-6zgkn\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:22 crc kubenswrapper[4869]: I0106 14:41:22.570571 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:41:23 crc kubenswrapper[4869]: I0106 14:41:23.245079 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz"] Jan 06 14:41:24 crc kubenswrapper[4869]: I0106 14:41:24.029275 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" event={"ID":"37c9408c-3a8b-4246-a7ec-d2e99d49790f","Type":"ContainerStarted","Data":"97a52e6ad568fddce522e72f6e7b7d1f4766c46fab3614829e113a58dd6dcc72"} Jan 06 14:41:25 crc kubenswrapper[4869]: I0106 14:41:25.042387 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" event={"ID":"37c9408c-3a8b-4246-a7ec-d2e99d49790f","Type":"ContainerStarted","Data":"faae860029792207d27b770246fd8080aac9ff869a981f45fe66890f48313a64"} Jan 06 14:41:27 crc kubenswrapper[4869]: I0106 14:41:27.707489 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:41:27 crc kubenswrapper[4869]: E0106 14:41:27.709176 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:41:38 crc kubenswrapper[4869]: I0106 14:41:38.704622 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:41:38 crc kubenswrapper[4869]: E0106 14:41:38.705704 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:41:49 crc kubenswrapper[4869]: I0106 14:41:49.705031 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:41:49 crc kubenswrapper[4869]: E0106 14:41:49.706159 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:42:01 crc kubenswrapper[4869]: I0106 14:42:01.722466 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:42:01 crc kubenswrapper[4869]: E0106 14:42:01.724015 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:42:12 crc kubenswrapper[4869]: I0106 14:42:12.704458 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:42:13 crc kubenswrapper[4869]: I0106 14:42:13.645633 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerStarted","Data":"2569f0867fe8fe621684413b395cccbe3394585df22f96bbc1a3cd7b50aaafc6"} Jan 06 14:42:13 crc kubenswrapper[4869]: I0106 14:42:13.669319 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" podStartSLOduration=50.71824558 podStartE2EDuration="51.669292708s" podCreationTimestamp="2026-01-06 14:41:22 +0000 UTC" firstStartedPulling="2026-01-06 14:41:23.26258923 +0000 UTC m=+2501.802276894" lastFinishedPulling="2026-01-06 14:41:24.213636358 +0000 UTC m=+2502.753324022" observedRunningTime="2026-01-06 14:41:25.091108918 +0000 UTC m=+2503.630796582" watchObservedRunningTime="2026-01-06 14:42:13.669292708 +0000 UTC m=+2552.208980372" Jan 06 14:42:37 crc kubenswrapper[4869]: I0106 14:42:37.932759 4869 generic.go:334] "Generic (PLEG): container finished" podID="37c9408c-3a8b-4246-a7ec-d2e99d49790f" containerID="faae860029792207d27b770246fd8080aac9ff869a981f45fe66890f48313a64" exitCode=0 Jan 06 14:42:37 crc kubenswrapper[4869]: I0106 14:42:37.932848 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" event={"ID":"37c9408c-3a8b-4246-a7ec-d2e99d49790f","Type":"ContainerDied","Data":"faae860029792207d27b770246fd8080aac9ff869a981f45fe66890f48313a64"} Jan 06 14:42:39 crc kubenswrapper[4869]: I0106 14:42:39.827556 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:42:39 crc kubenswrapper[4869]: I0106 14:42:39.972759 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-ceph\") pod \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " Jan 06 14:42:39 crc kubenswrapper[4869]: I0106 14:42:39.972820 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " Jan 06 14:42:39 crc kubenswrapper[4869]: I0106 14:42:39.972865 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-inventory\") pod \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " Jan 06 14:42:39 crc kubenswrapper[4869]: I0106 14:42:39.972889 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zgkn\" (UniqueName: \"kubernetes.io/projected/37c9408c-3a8b-4246-a7ec-d2e99d49790f-kube-api-access-6zgkn\") pod \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " Jan 06 14:42:39 crc kubenswrapper[4869]: I0106 14:42:39.972984 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-ssh-key-openstack-edpm-ipam\") pod \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " Jan 06 14:42:39 crc kubenswrapper[4869]: I0106 14:42:39.973075 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-nova-metadata-neutron-config-0\") pod \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " Jan 06 14:42:39 crc kubenswrapper[4869]: I0106 14:42:39.973099 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-neutron-metadata-combined-ca-bundle\") pod \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " Jan 06 14:42:39 crc kubenswrapper[4869]: I0106 14:42:39.986190 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "37c9408c-3a8b-4246-a7ec-d2e99d49790f" (UID: "37c9408c-3a8b-4246-a7ec-d2e99d49790f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:42:39 crc kubenswrapper[4869]: I0106 14:42:39.986903 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c9408c-3a8b-4246-a7ec-d2e99d49790f-kube-api-access-6zgkn" (OuterVolumeSpecName: "kube-api-access-6zgkn") pod "37c9408c-3a8b-4246-a7ec-d2e99d49790f" (UID: "37c9408c-3a8b-4246-a7ec-d2e99d49790f"). InnerVolumeSpecName "kube-api-access-6zgkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:42:39 crc kubenswrapper[4869]: I0106 14:42:39.990813 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-ceph" (OuterVolumeSpecName: "ceph") pod "37c9408c-3a8b-4246-a7ec-d2e99d49790f" (UID: "37c9408c-3a8b-4246-a7ec-d2e99d49790f"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.050863 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-inventory" (OuterVolumeSpecName: "inventory") pod "37c9408c-3a8b-4246-a7ec-d2e99d49790f" (UID: "37c9408c-3a8b-4246-a7ec-d2e99d49790f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.057135 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "37c9408c-3a8b-4246-a7ec-d2e99d49790f" (UID: "37c9408c-3a8b-4246-a7ec-d2e99d49790f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.057929 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "37c9408c-3a8b-4246-a7ec-d2e99d49790f" (UID: "37c9408c-3a8b-4246-a7ec-d2e99d49790f"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.075212 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "37c9408c-3a8b-4246-a7ec-d2e99d49790f" (UID: "37c9408c-3a8b-4246-a7ec-d2e99d49790f"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.075734 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\" (UID: \"37c9408c-3a8b-4246-a7ec-d2e99d49790f\") " Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.076330 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-ceph\") on node \"crc\" DevicePath \"\"" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.076351 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-inventory\") on node \"crc\" DevicePath \"\"" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.076363 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zgkn\" (UniqueName: \"kubernetes.io/projected/37c9408c-3a8b-4246-a7ec-d2e99d49790f-kube-api-access-6zgkn\") on node \"crc\" DevicePath \"\"" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.076375 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.076387 4869 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.076397 4869 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:42:40 crc kubenswrapper[4869]: W0106 14:42:40.076505 4869 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/37c9408c-3a8b-4246-a7ec-d2e99d49790f/volumes/kubernetes.io~secret/neutron-ovn-metadata-agent-neutron-config-0 Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.076521 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "37c9408c-3a8b-4246-a7ec-d2e99d49790f" (UID: "37c9408c-3a8b-4246-a7ec-d2e99d49790f"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.155404 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj"] Jan 06 14:42:40 crc kubenswrapper[4869]: E0106 14:42:40.155905 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c9408c-3a8b-4246-a7ec-d2e99d49790f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.155927 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c9408c-3a8b-4246-a7ec-d2e99d49790f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.156166 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="37c9408c-3a8b-4246-a7ec-d2e99d49790f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.156941 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.166954 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.175138 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj"] Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.178705 4869 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/37c9408c-3a8b-4246-a7ec-d2e99d49790f-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.280370 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.282961 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.283138 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.283385 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.283538 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6cfj\" (UniqueName: \"kubernetes.io/projected/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-kube-api-access-g6cfj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.283585 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.310624 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" event={"ID":"37c9408c-3a8b-4246-a7ec-d2e99d49790f","Type":"ContainerDied","Data":"97a52e6ad568fddce522e72f6e7b7d1f4766c46fab3614829e113a58dd6dcc72"} Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.310699 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97a52e6ad568fddce522e72f6e7b7d1f4766c46fab3614829e113a58dd6dcc72" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.310819 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.385016 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.385322 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.385478 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.385605 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6cfj\" (UniqueName: \"kubernetes.io/projected/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-kube-api-access-g6cfj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 
14:42:40.385760 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.385926 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.390850 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.390858 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.391557 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.392648 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.393508 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.402657 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6cfj\" (UniqueName: \"kubernetes.io/projected/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-kube-api-access-g6cfj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:42:40 crc kubenswrapper[4869]: I0106 14:42:40.479289 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:42:41 crc kubenswrapper[4869]: I0106 14:42:41.101517 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj"] Jan 06 14:42:41 crc kubenswrapper[4869]: I0106 14:42:41.320645 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" event={"ID":"bb8a6d75-fe0e-4703-b592-39c4ff9241d5","Type":"ContainerStarted","Data":"183e4bd617cda067c891a9a3b098806bbb97a82cdd83b8c34393194b2283b910"} Jan 06 14:42:42 crc kubenswrapper[4869]: I0106 14:42:42.333891 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" event={"ID":"bb8a6d75-fe0e-4703-b592-39c4ff9241d5","Type":"ContainerStarted","Data":"4a2ce13ceea2d0883de4446310bb05285b760f07691abdf39f26a7e60851d685"} Jan 06 14:42:42 crc kubenswrapper[4869]: I0106 14:42:42.365726 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" podStartSLOduration=1.802151362 podStartE2EDuration="2.365697773s" podCreationTimestamp="2026-01-06 14:42:40 +0000 UTC" firstStartedPulling="2026-01-06 14:42:41.112753483 +0000 UTC m=+2579.652441167" lastFinishedPulling="2026-01-06 14:42:41.676299884 +0000 UTC m=+2580.215987578" observedRunningTime="2026-01-06 14:42:42.357090507 +0000 UTC m=+2580.896778271" watchObservedRunningTime="2026-01-06 14:42:42.365697773 +0000 UTC m=+2580.905385477" Jan 06 14:43:55 crc kubenswrapper[4869]: I0106 14:43:55.313348 4869 scope.go:117] "RemoveContainer" containerID="981f29ee4c68af8e471e4697abb4dd6b3d759395957ed625c040201b9b96ea5a" Jan 06 14:43:55 crc kubenswrapper[4869]: I0106 14:43:55.347480 4869 scope.go:117] "RemoveContainer" containerID="03dadd397207cccc1d898820629bace367c47f3ea3824dc4cd67a7702b7409cd" Jan 06 14:43:55 crc kubenswrapper[4869]: I0106 14:43:55.387848 4869 scope.go:117] "RemoveContainer" containerID="6c50034c0efbdeb14186f1c841ac853bca5a5473b7907baed48a1870749bc991" Jan 06 14:44:33 crc kubenswrapper[4869]: I0106 14:44:33.622278 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:44:33 crc kubenswrapper[4869]: I0106 14:44:33.622879 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:45:00 crc kubenswrapper[4869]: I0106 14:45:00.185056 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29461845-6l89x"] Jan 06 14:45:00 crc kubenswrapper[4869]: I0106 14:45:00.187854 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29461845-6l89x" Jan 06 14:45:00 crc kubenswrapper[4869]: I0106 14:45:00.190443 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 06 14:45:00 crc kubenswrapper[4869]: I0106 14:45:00.197143 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 06 14:45:00 crc kubenswrapper[4869]: I0106 14:45:00.201079 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29461845-6l89x"] Jan 06 14:45:00 crc kubenswrapper[4869]: I0106 14:45:00.328188 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p72g\" (UniqueName: \"kubernetes.io/projected/add290bb-cb89-4cbb-83d8-4849b9293400-kube-api-access-4p72g\") pod \"collect-profiles-29461845-6l89x\" (UID: \"add290bb-cb89-4cbb-83d8-4849b9293400\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461845-6l89x" Jan 06 14:45:00 crc kubenswrapper[4869]: I0106 14:45:00.328394 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/add290bb-cb89-4cbb-83d8-4849b9293400-secret-volume\") pod \"collect-profiles-29461845-6l89x\" (UID: \"add290bb-cb89-4cbb-83d8-4849b9293400\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461845-6l89x" Jan 06 14:45:00 crc kubenswrapper[4869]: I0106 14:45:00.328446 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/add290bb-cb89-4cbb-83d8-4849b9293400-config-volume\") pod \"collect-profiles-29461845-6l89x\" (UID: \"add290bb-cb89-4cbb-83d8-4849b9293400\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461845-6l89x" Jan 06 14:45:00 crc kubenswrapper[4869]: I0106 14:45:00.431032 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p72g\" (UniqueName: \"kubernetes.io/projected/add290bb-cb89-4cbb-83d8-4849b9293400-kube-api-access-4p72g\") pod \"collect-profiles-29461845-6l89x\" (UID: \"add290bb-cb89-4cbb-83d8-4849b9293400\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461845-6l89x" Jan 06 14:45:00 crc kubenswrapper[4869]: I0106 14:45:00.431149 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/add290bb-cb89-4cbb-83d8-4849b9293400-secret-volume\") pod \"collect-profiles-29461845-6l89x\" (UID: \"add290bb-cb89-4cbb-83d8-4849b9293400\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461845-6l89x" Jan 06 14:45:00 crc kubenswrapper[4869]: I0106 14:45:00.431174 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/add290bb-cb89-4cbb-83d8-4849b9293400-config-volume\") pod \"collect-profiles-29461845-6l89x\" (UID: \"add290bb-cb89-4cbb-83d8-4849b9293400\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461845-6l89x" Jan 06 14:45:00 crc kubenswrapper[4869]: I0106 14:45:00.432219 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/add290bb-cb89-4cbb-83d8-4849b9293400-config-volume\") pod 
\"collect-profiles-29461845-6l89x\" (UID: \"add290bb-cb89-4cbb-83d8-4849b9293400\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461845-6l89x" Jan 06 14:45:00 crc kubenswrapper[4869]: I0106 14:45:00.443826 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/add290bb-cb89-4cbb-83d8-4849b9293400-secret-volume\") pod \"collect-profiles-29461845-6l89x\" (UID: \"add290bb-cb89-4cbb-83d8-4849b9293400\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461845-6l89x" Jan 06 14:45:00 crc kubenswrapper[4869]: I0106 14:45:00.448566 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p72g\" (UniqueName: \"kubernetes.io/projected/add290bb-cb89-4cbb-83d8-4849b9293400-kube-api-access-4p72g\") pod \"collect-profiles-29461845-6l89x\" (UID: \"add290bb-cb89-4cbb-83d8-4849b9293400\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461845-6l89x" Jan 06 14:45:00 crc kubenswrapper[4869]: I0106 14:45:00.517081 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29461845-6l89x" Jan 06 14:45:00 crc kubenswrapper[4869]: I0106 14:45:00.985304 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29461845-6l89x"] Jan 06 14:45:01 crc kubenswrapper[4869]: I0106 14:45:01.771552 4869 generic.go:334] "Generic (PLEG): container finished" podID="add290bb-cb89-4cbb-83d8-4849b9293400" containerID="c2126e3555621242ed30c30a60d7069e8cae39637ad844995101cc4162aac1b7" exitCode=0 Jan 06 14:45:01 crc kubenswrapper[4869]: I0106 14:45:01.771648 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29461845-6l89x" event={"ID":"add290bb-cb89-4cbb-83d8-4849b9293400","Type":"ContainerDied","Data":"c2126e3555621242ed30c30a60d7069e8cae39637ad844995101cc4162aac1b7"} Jan 06 14:45:01 crc kubenswrapper[4869]: I0106 14:45:01.772263 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29461845-6l89x" event={"ID":"add290bb-cb89-4cbb-83d8-4849b9293400","Type":"ContainerStarted","Data":"89e4c735d400452f17ab30da8838240465930ea69d627107bbd91454bc13fb8b"} Jan 06 14:45:03 crc kubenswrapper[4869]: I0106 14:45:03.114174 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29461845-6l89x" Jan 06 14:45:03 crc kubenswrapper[4869]: I0106 14:45:03.191263 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p72g\" (UniqueName: \"kubernetes.io/projected/add290bb-cb89-4cbb-83d8-4849b9293400-kube-api-access-4p72g\") pod \"add290bb-cb89-4cbb-83d8-4849b9293400\" (UID: \"add290bb-cb89-4cbb-83d8-4849b9293400\") " Jan 06 14:45:03 crc kubenswrapper[4869]: I0106 14:45:03.191490 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/add290bb-cb89-4cbb-83d8-4849b9293400-config-volume\") pod \"add290bb-cb89-4cbb-83d8-4849b9293400\" (UID: \"add290bb-cb89-4cbb-83d8-4849b9293400\") " Jan 06 14:45:03 crc kubenswrapper[4869]: I0106 14:45:03.191702 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/add290bb-cb89-4cbb-83d8-4849b9293400-secret-volume\") pod \"add290bb-cb89-4cbb-83d8-4849b9293400\" (UID: \"add290bb-cb89-4cbb-83d8-4849b9293400\") " Jan 06 14:45:03 crc kubenswrapper[4869]: I0106 14:45:03.192637 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/add290bb-cb89-4cbb-83d8-4849b9293400-config-volume" (OuterVolumeSpecName: "config-volume") pod "add290bb-cb89-4cbb-83d8-4849b9293400" (UID: "add290bb-cb89-4cbb-83d8-4849b9293400"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:45:03 crc kubenswrapper[4869]: I0106 14:45:03.198467 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/add290bb-cb89-4cbb-83d8-4849b9293400-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "add290bb-cb89-4cbb-83d8-4849b9293400" (UID: "add290bb-cb89-4cbb-83d8-4849b9293400"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:45:03 crc kubenswrapper[4869]: I0106 14:45:03.199198 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/add290bb-cb89-4cbb-83d8-4849b9293400-kube-api-access-4p72g" (OuterVolumeSpecName: "kube-api-access-4p72g") pod "add290bb-cb89-4cbb-83d8-4849b9293400" (UID: "add290bb-cb89-4cbb-83d8-4849b9293400"). InnerVolumeSpecName "kube-api-access-4p72g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:45:03 crc kubenswrapper[4869]: I0106 14:45:03.295964 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/add290bb-cb89-4cbb-83d8-4849b9293400-config-volume\") on node \"crc\" DevicePath \"\"" Jan 06 14:45:03 crc kubenswrapper[4869]: I0106 14:45:03.296009 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/add290bb-cb89-4cbb-83d8-4849b9293400-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 06 14:45:03 crc kubenswrapper[4869]: I0106 14:45:03.296021 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p72g\" (UniqueName: \"kubernetes.io/projected/add290bb-cb89-4cbb-83d8-4849b9293400-kube-api-access-4p72g\") on node \"crc\" DevicePath \"\"" Jan 06 14:45:03 crc kubenswrapper[4869]: I0106 14:45:03.622347 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:45:03 crc kubenswrapper[4869]: I0106 14:45:03.622461 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:45:03 crc kubenswrapper[4869]: I0106 14:45:03.799039 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29461845-6l89x" event={"ID":"add290bb-cb89-4cbb-83d8-4849b9293400","Type":"ContainerDied","Data":"89e4c735d400452f17ab30da8838240465930ea69d627107bbd91454bc13fb8b"} Jan 06 14:45:03 crc kubenswrapper[4869]: I0106 14:45:03.799079 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29461845-6l89x" Jan 06 14:45:03 crc kubenswrapper[4869]: I0106 14:45:03.799104 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89e4c735d400452f17ab30da8838240465930ea69d627107bbd91454bc13fb8b" Jan 06 14:45:04 crc kubenswrapper[4869]: I0106 14:45:04.226171 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92"] Jan 06 14:45:04 crc kubenswrapper[4869]: I0106 14:45:04.242419 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29461800-4xp92"] Jan 06 14:45:05 crc kubenswrapper[4869]: I0106 14:45:05.725865 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f52f78b-eb13-45bc-bf05-d1c138781664" path="/var/lib/kubelet/pods/2f52f78b-eb13-45bc-bf05-d1c138781664/volumes" Jan 06 14:45:33 crc kubenswrapper[4869]: I0106 14:45:33.622058 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:45:33 crc kubenswrapper[4869]: I0106 14:45:33.622629 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:45:33 crc kubenswrapper[4869]: I0106 14:45:33.622705 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:45:33 crc kubenswrapper[4869]: I0106 14:45:33.623327 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2569f0867fe8fe621684413b395cccbe3394585df22f96bbc1a3cd7b50aaafc6"} pod="openshift-machine-config-operator/machine-config-daemon-kt9df" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 06 14:45:33 crc kubenswrapper[4869]: I0106 14:45:33.623376 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" containerID="cri-o://2569f0867fe8fe621684413b395cccbe3394585df22f96bbc1a3cd7b50aaafc6" gracePeriod=600 Jan 06 14:45:34 crc kubenswrapper[4869]: I0106 14:45:34.145468 4869 generic.go:334] "Generic (PLEG): container finished" podID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerID="2569f0867fe8fe621684413b395cccbe3394585df22f96bbc1a3cd7b50aaafc6" exitCode=0 Jan 06 14:45:34 crc kubenswrapper[4869]: I0106 14:45:34.145817 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerDied","Data":"2569f0867fe8fe621684413b395cccbe3394585df22f96bbc1a3cd7b50aaafc6"} Jan 06 14:45:34 crc kubenswrapper[4869]: I0106 14:45:34.145847 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" 
event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerStarted","Data":"c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d"} Jan 06 14:45:34 crc kubenswrapper[4869]: I0106 14:45:34.145865 4869 scope.go:117] "RemoveContainer" containerID="9c58ddbf7542a87af7425f3176f1893cb617468d9e6dec2b9545b08f76a986af" Jan 06 14:45:55 crc kubenswrapper[4869]: I0106 14:45:55.482387 4869 scope.go:117] "RemoveContainer" containerID="ac847b6b32460687045ab4180b0b84142fc56f873cc7b4a8a4f056d9c0660d3b" Jan 06 14:47:33 crc kubenswrapper[4869]: I0106 14:47:33.622047 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:47:33 crc kubenswrapper[4869]: I0106 14:47:33.622606 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:47:40 crc kubenswrapper[4869]: I0106 14:47:40.407370 4869 generic.go:334] "Generic (PLEG): container finished" podID="bb8a6d75-fe0e-4703-b592-39c4ff9241d5" containerID="4a2ce13ceea2d0883de4446310bb05285b760f07691abdf39f26a7e60851d685" exitCode=0 Jan 06 14:47:40 crc kubenswrapper[4869]: I0106 14:47:40.407509 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" event={"ID":"bb8a6d75-fe0e-4703-b592-39c4ff9241d5","Type":"ContainerDied","Data":"4a2ce13ceea2d0883de4446310bb05285b760f07691abdf39f26a7e60851d685"} Jan 06 14:47:41 crc kubenswrapper[4869]: I0106 14:47:41.884596 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:47:41 crc kubenswrapper[4869]: I0106 14:47:41.967748 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6cfj\" (UniqueName: \"kubernetes.io/projected/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-kube-api-access-g6cfj\") pod \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " Jan 06 14:47:41 crc kubenswrapper[4869]: I0106 14:47:41.968125 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-ceph\") pod \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " Jan 06 14:47:41 crc kubenswrapper[4869]: I0106 14:47:41.968272 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-ssh-key-openstack-edpm-ipam\") pod \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " Jan 06 14:47:41 crc kubenswrapper[4869]: I0106 14:47:41.968461 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-inventory\") pod \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " Jan 06 14:47:41 crc kubenswrapper[4869]: I0106 14:47:41.968613 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-libvirt-combined-ca-bundle\") pod \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " Jan 06 14:47:41 crc kubenswrapper[4869]: I0106 14:47:41.968738 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-libvirt-secret-0\") pod \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\" (UID: \"bb8a6d75-fe0e-4703-b592-39c4ff9241d5\") " Jan 06 14:47:41 crc kubenswrapper[4869]: I0106 14:47:41.973993 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-ceph" (OuterVolumeSpecName: "ceph") pod "bb8a6d75-fe0e-4703-b592-39c4ff9241d5" (UID: "bb8a6d75-fe0e-4703-b592-39c4ff9241d5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:47:41 crc kubenswrapper[4869]: I0106 14:47:41.974387 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "bb8a6d75-fe0e-4703-b592-39c4ff9241d5" (UID: "bb8a6d75-fe0e-4703-b592-39c4ff9241d5"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:47:41 crc kubenswrapper[4869]: I0106 14:47:41.975628 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-kube-api-access-g6cfj" (OuterVolumeSpecName: "kube-api-access-g6cfj") pod "bb8a6d75-fe0e-4703-b592-39c4ff9241d5" (UID: "bb8a6d75-fe0e-4703-b592-39c4ff9241d5"). InnerVolumeSpecName "kube-api-access-g6cfj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.002993 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-inventory" (OuterVolumeSpecName: "inventory") pod "bb8a6d75-fe0e-4703-b592-39c4ff9241d5" (UID: "bb8a6d75-fe0e-4703-b592-39c4ff9241d5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.005079 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "bb8a6d75-fe0e-4703-b592-39c4ff9241d5" (UID: "bb8a6d75-fe0e-4703-b592-39c4ff9241d5"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.005299 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bb8a6d75-fe0e-4703-b592-39c4ff9241d5" (UID: "bb8a6d75-fe0e-4703-b592-39c4ff9241d5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.071258 4869 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.071577 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-inventory\") on node \"crc\" DevicePath \"\"" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.071590 4869 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.071599 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6cfj\" (UniqueName: \"kubernetes.io/projected/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-kube-api-access-g6cfj\") on node \"crc\" DevicePath \"\"" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.071608 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-ceph\") on node \"crc\" DevicePath \"\"" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.071618 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bb8a6d75-fe0e-4703-b592-39c4ff9241d5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.432161 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" event={"ID":"bb8a6d75-fe0e-4703-b592-39c4ff9241d5","Type":"ContainerDied","Data":"183e4bd617cda067c891a9a3b098806bbb97a82cdd83b8c34393194b2283b910"} Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.432231 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.432265 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="183e4bd617cda067c891a9a3b098806bbb97a82cdd83b8c34393194b2283b910" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.630495 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj"] Jan 06 14:47:42 crc kubenswrapper[4869]: E0106 14:47:42.631080 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add290bb-cb89-4cbb-83d8-4849b9293400" containerName="collect-profiles" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.631190 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="add290bb-cb89-4cbb-83d8-4849b9293400" containerName="collect-profiles" Jan 06 14:47:42 crc kubenswrapper[4869]: E0106 14:47:42.631277 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb8a6d75-fe0e-4703-b592-39c4ff9241d5" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.631333 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb8a6d75-fe0e-4703-b592-39c4ff9241d5" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.631630 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="add290bb-cb89-4cbb-83d8-4849b9293400" containerName="collect-profiles" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.632827 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb8a6d75-fe0e-4703-b592-39c4ff9241d5" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.633496 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.638448 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.638557 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.638903 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.640097 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.640586 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.640650 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.640604 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.640625 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.641028 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.650511 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj"] Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.686996 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.687360 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.687483 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.687614 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.687720 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.687823 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/8833c85f-4713-4005-a7ad-e3446d62c1cf-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.687906 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.687995 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.688113 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8vc8\" (UniqueName: \"kubernetes.io/projected/8833c85f-4713-4005-a7ad-e3446d62c1cf-kube-api-access-n8vc8\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.688232 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.688330 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.790041 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8vc8\" (UniqueName: \"kubernetes.io/projected/8833c85f-4713-4005-a7ad-e3446d62c1cf-kube-api-access-n8vc8\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.790352 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.790448 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.790572 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.790798 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.790890 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.791003 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.791121 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.791199 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/8833c85f-4713-4005-a7ad-e3446d62c1cf-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.791247 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.791301 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.793841 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.794186 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.794271 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.794341 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.794483 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.794487 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.794718 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.794899 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.801575 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-extra-config-0\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.802776 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/8833c85f-4713-4005-a7ad-e3446d62c1cf-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.804650 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.805256 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.805890 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.805919 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.807028 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.810152 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.812145 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.827279 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8vc8\" (UniqueName: \"kubernetes.io/projected/8833c85f-4713-4005-a7ad-e3446d62c1cf-kube-api-access-n8vc8\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.950415 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-qbvg5" Jan 06 14:47:42 crc kubenswrapper[4869]: I0106 14:47:42.959236 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:47:43 crc kubenswrapper[4869]: I0106 14:47:43.556115 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj"] Jan 06 14:47:43 crc kubenswrapper[4869]: I0106 14:47:43.568883 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 06 14:47:44 crc kubenswrapper[4869]: I0106 14:47:44.114331 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 06 14:47:44 crc kubenswrapper[4869]: I0106 14:47:44.448325 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" event={"ID":"8833c85f-4713-4005-a7ad-e3446d62c1cf","Type":"ContainerStarted","Data":"97319674790f8c1019b723619f5dafe603afa07e2bf8b86e580f7aba42d4d341"} Jan 06 14:47:44 crc kubenswrapper[4869]: I0106 14:47:44.448623 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" event={"ID":"8833c85f-4713-4005-a7ad-e3446d62c1cf","Type":"ContainerStarted","Data":"12cb91bb754ef4ca28881e75b7c9998a5b57caf77a8712b5cbd1370cf4a72f84"} Jan 06 14:47:44 crc kubenswrapper[4869]: I0106 14:47:44.477829 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" podStartSLOduration=1.934730081 podStartE2EDuration="2.477797252s" podCreationTimestamp="2026-01-06 14:47:42 +0000 UTC" firstStartedPulling="2026-01-06 14:47:43.568509744 +0000 UTC m=+2882.108197428" lastFinishedPulling="2026-01-06 14:47:44.111576905 +0000 UTC m=+2882.651264599" observedRunningTime="2026-01-06 14:47:44.46757536 +0000 UTC m=+2883.007263034" watchObservedRunningTime="2026-01-06 14:47:44.477797252 +0000 UTC m=+2883.017484946" Jan 06 14:48:03 crc kubenswrapper[4869]: I0106 14:48:03.621896 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:48:03 crc kubenswrapper[4869]: I0106 14:48:03.622468 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" 
podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:48:13 crc kubenswrapper[4869]: I0106 14:48:13.843761 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mcpqh"] Jan 06 14:48:13 crc kubenswrapper[4869]: I0106 14:48:13.847548 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mcpqh" Jan 06 14:48:13 crc kubenswrapper[4869]: I0106 14:48:13.855721 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mcpqh"] Jan 06 14:48:13 crc kubenswrapper[4869]: I0106 14:48:13.950304 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04b3de0f-7676-4f43-84f1-bec0f6609d2f-utilities\") pod \"redhat-operators-mcpqh\" (UID: \"04b3de0f-7676-4f43-84f1-bec0f6609d2f\") " pod="openshift-marketplace/redhat-operators-mcpqh" Jan 06 14:48:13 crc kubenswrapper[4869]: I0106 14:48:13.950517 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kmxm\" (UniqueName: \"kubernetes.io/projected/04b3de0f-7676-4f43-84f1-bec0f6609d2f-kube-api-access-7kmxm\") pod \"redhat-operators-mcpqh\" (UID: \"04b3de0f-7676-4f43-84f1-bec0f6609d2f\") " pod="openshift-marketplace/redhat-operators-mcpqh" Jan 06 14:48:13 crc kubenswrapper[4869]: I0106 14:48:13.950548 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04b3de0f-7676-4f43-84f1-bec0f6609d2f-catalog-content\") pod \"redhat-operators-mcpqh\" (UID: \"04b3de0f-7676-4f43-84f1-bec0f6609d2f\") " pod="openshift-marketplace/redhat-operators-mcpqh" Jan 06 14:48:14 crc kubenswrapper[4869]: I0106 14:48:14.051998 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04b3de0f-7676-4f43-84f1-bec0f6609d2f-utilities\") pod \"redhat-operators-mcpqh\" (UID: \"04b3de0f-7676-4f43-84f1-bec0f6609d2f\") " pod="openshift-marketplace/redhat-operators-mcpqh" Jan 06 14:48:14 crc kubenswrapper[4869]: I0106 14:48:14.052205 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kmxm\" (UniqueName: \"kubernetes.io/projected/04b3de0f-7676-4f43-84f1-bec0f6609d2f-kube-api-access-7kmxm\") pod \"redhat-operators-mcpqh\" (UID: \"04b3de0f-7676-4f43-84f1-bec0f6609d2f\") " pod="openshift-marketplace/redhat-operators-mcpqh" Jan 06 14:48:14 crc kubenswrapper[4869]: I0106 14:48:14.052236 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04b3de0f-7676-4f43-84f1-bec0f6609d2f-catalog-content\") pod \"redhat-operators-mcpqh\" (UID: \"04b3de0f-7676-4f43-84f1-bec0f6609d2f\") " pod="openshift-marketplace/redhat-operators-mcpqh" Jan 06 14:48:14 crc kubenswrapper[4869]: I0106 14:48:14.052818 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04b3de0f-7676-4f43-84f1-bec0f6609d2f-catalog-content\") pod \"redhat-operators-mcpqh\" (UID: \"04b3de0f-7676-4f43-84f1-bec0f6609d2f\") " pod="openshift-marketplace/redhat-operators-mcpqh" Jan 06 14:48:14 crc 
kubenswrapper[4869]: I0106 14:48:14.053109 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04b3de0f-7676-4f43-84f1-bec0f6609d2f-utilities\") pod \"redhat-operators-mcpqh\" (UID: \"04b3de0f-7676-4f43-84f1-bec0f6609d2f\") " pod="openshift-marketplace/redhat-operators-mcpqh" Jan 06 14:48:14 crc kubenswrapper[4869]: I0106 14:48:14.073024 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kmxm\" (UniqueName: \"kubernetes.io/projected/04b3de0f-7676-4f43-84f1-bec0f6609d2f-kube-api-access-7kmxm\") pod \"redhat-operators-mcpqh\" (UID: \"04b3de0f-7676-4f43-84f1-bec0f6609d2f\") " pod="openshift-marketplace/redhat-operators-mcpqh" Jan 06 14:48:14 crc kubenswrapper[4869]: I0106 14:48:14.189416 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mcpqh" Jan 06 14:48:14 crc kubenswrapper[4869]: I0106 14:48:14.640766 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mcpqh"] Jan 06 14:48:14 crc kubenswrapper[4869]: I0106 14:48:14.767015 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcpqh" event={"ID":"04b3de0f-7676-4f43-84f1-bec0f6609d2f","Type":"ContainerStarted","Data":"7edd6d961f5537a412da238b4bb27b8193532aca148ad585cb9378cb14045c7c"} Jan 06 14:48:15 crc kubenswrapper[4869]: I0106 14:48:15.777896 4869 generic.go:334] "Generic (PLEG): container finished" podID="04b3de0f-7676-4f43-84f1-bec0f6609d2f" containerID="d02ff0ea8271afa69a3870ee4647d345ae23e5caca6358416920194a9dfca83f" exitCode=0 Jan 06 14:48:15 crc kubenswrapper[4869]: I0106 14:48:15.777957 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcpqh" event={"ID":"04b3de0f-7676-4f43-84f1-bec0f6609d2f","Type":"ContainerDied","Data":"d02ff0ea8271afa69a3870ee4647d345ae23e5caca6358416920194a9dfca83f"} Jan 06 14:48:16 crc kubenswrapper[4869]: I0106 14:48:16.796138 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcpqh" event={"ID":"04b3de0f-7676-4f43-84f1-bec0f6609d2f","Type":"ContainerStarted","Data":"bfea18aa14766dd4e84f370b95b7ea55bef7fa04c00fa6748dae297036523a9d"} Jan 06 14:48:17 crc kubenswrapper[4869]: I0106 14:48:17.811403 4869 generic.go:334] "Generic (PLEG): container finished" podID="04b3de0f-7676-4f43-84f1-bec0f6609d2f" containerID="bfea18aa14766dd4e84f370b95b7ea55bef7fa04c00fa6748dae297036523a9d" exitCode=0 Jan 06 14:48:17 crc kubenswrapper[4869]: I0106 14:48:17.811555 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcpqh" event={"ID":"04b3de0f-7676-4f43-84f1-bec0f6609d2f","Type":"ContainerDied","Data":"bfea18aa14766dd4e84f370b95b7ea55bef7fa04c00fa6748dae297036523a9d"} Jan 06 14:48:18 crc kubenswrapper[4869]: I0106 14:48:18.421023 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tgp5m"] Jan 06 14:48:18 crc kubenswrapper[4869]: I0106 14:48:18.424242 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tgp5m" Jan 06 14:48:18 crc kubenswrapper[4869]: I0106 14:48:18.453769 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tgp5m"] Jan 06 14:48:18 crc kubenswrapper[4869]: I0106 14:48:18.552391 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1f05f65-85af-4be3-92cc-e9fa5b317a73-utilities\") pod \"certified-operators-tgp5m\" (UID: \"c1f05f65-85af-4be3-92cc-e9fa5b317a73\") " pod="openshift-marketplace/certified-operators-tgp5m" Jan 06 14:48:18 crc kubenswrapper[4869]: I0106 14:48:18.552473 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1f05f65-85af-4be3-92cc-e9fa5b317a73-catalog-content\") pod \"certified-operators-tgp5m\" (UID: \"c1f05f65-85af-4be3-92cc-e9fa5b317a73\") " pod="openshift-marketplace/certified-operators-tgp5m" Jan 06 14:48:18 crc kubenswrapper[4869]: I0106 14:48:18.552544 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gzs7\" (UniqueName: \"kubernetes.io/projected/c1f05f65-85af-4be3-92cc-e9fa5b317a73-kube-api-access-7gzs7\") pod \"certified-operators-tgp5m\" (UID: \"c1f05f65-85af-4be3-92cc-e9fa5b317a73\") " pod="openshift-marketplace/certified-operators-tgp5m" Jan 06 14:48:18 crc kubenswrapper[4869]: I0106 14:48:18.654175 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1f05f65-85af-4be3-92cc-e9fa5b317a73-utilities\") pod \"certified-operators-tgp5m\" (UID: \"c1f05f65-85af-4be3-92cc-e9fa5b317a73\") " pod="openshift-marketplace/certified-operators-tgp5m" Jan 06 14:48:18 crc kubenswrapper[4869]: I0106 14:48:18.654370 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1f05f65-85af-4be3-92cc-e9fa5b317a73-catalog-content\") pod \"certified-operators-tgp5m\" (UID: \"c1f05f65-85af-4be3-92cc-e9fa5b317a73\") " pod="openshift-marketplace/certified-operators-tgp5m" Jan 06 14:48:18 crc kubenswrapper[4869]: I0106 14:48:18.654513 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gzs7\" (UniqueName: \"kubernetes.io/projected/c1f05f65-85af-4be3-92cc-e9fa5b317a73-kube-api-access-7gzs7\") pod \"certified-operators-tgp5m\" (UID: \"c1f05f65-85af-4be3-92cc-e9fa5b317a73\") " pod="openshift-marketplace/certified-operators-tgp5m" Jan 06 14:48:18 crc kubenswrapper[4869]: I0106 14:48:18.654735 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1f05f65-85af-4be3-92cc-e9fa5b317a73-utilities\") pod \"certified-operators-tgp5m\" (UID: \"c1f05f65-85af-4be3-92cc-e9fa5b317a73\") " pod="openshift-marketplace/certified-operators-tgp5m" Jan 06 14:48:18 crc kubenswrapper[4869]: I0106 14:48:18.655018 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1f05f65-85af-4be3-92cc-e9fa5b317a73-catalog-content\") pod \"certified-operators-tgp5m\" (UID: \"c1f05f65-85af-4be3-92cc-e9fa5b317a73\") " pod="openshift-marketplace/certified-operators-tgp5m" Jan 06 14:48:18 crc kubenswrapper[4869]: I0106 14:48:18.681360 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7gzs7\" (UniqueName: \"kubernetes.io/projected/c1f05f65-85af-4be3-92cc-e9fa5b317a73-kube-api-access-7gzs7\") pod \"certified-operators-tgp5m\" (UID: \"c1f05f65-85af-4be3-92cc-e9fa5b317a73\") " pod="openshift-marketplace/certified-operators-tgp5m" Jan 06 14:48:18 crc kubenswrapper[4869]: I0106 14:48:18.760297 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tgp5m" Jan 06 14:48:19 crc kubenswrapper[4869]: I0106 14:48:19.218447 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tgp5m"] Jan 06 14:48:19 crc kubenswrapper[4869]: W0106 14:48:19.226442 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1f05f65_85af_4be3_92cc_e9fa5b317a73.slice/crio-494459cf3ab953b68b260b4b835b50c71a94a05b8b03e6609e7a883525c290f8 WatchSource:0}: Error finding container 494459cf3ab953b68b260b4b835b50c71a94a05b8b03e6609e7a883525c290f8: Status 404 returned error can't find the container with id 494459cf3ab953b68b260b4b835b50c71a94a05b8b03e6609e7a883525c290f8 Jan 06 14:48:19 crc kubenswrapper[4869]: I0106 14:48:19.833856 4869 generic.go:334] "Generic (PLEG): container finished" podID="c1f05f65-85af-4be3-92cc-e9fa5b317a73" containerID="550ca8890d1121439ed29a672204227921d03e9e6c93ba2173aa9c55e7ce502a" exitCode=0 Jan 06 14:48:19 crc kubenswrapper[4869]: I0106 14:48:19.834081 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgp5m" event={"ID":"c1f05f65-85af-4be3-92cc-e9fa5b317a73","Type":"ContainerDied","Data":"550ca8890d1121439ed29a672204227921d03e9e6c93ba2173aa9c55e7ce502a"} Jan 06 14:48:19 crc kubenswrapper[4869]: I0106 14:48:19.834279 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgp5m" event={"ID":"c1f05f65-85af-4be3-92cc-e9fa5b317a73","Type":"ContainerStarted","Data":"494459cf3ab953b68b260b4b835b50c71a94a05b8b03e6609e7a883525c290f8"} Jan 06 14:48:19 crc kubenswrapper[4869]: I0106 14:48:19.837811 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcpqh" event={"ID":"04b3de0f-7676-4f43-84f1-bec0f6609d2f","Type":"ContainerStarted","Data":"f5e4c86b331136abbc52bc78d2bcc722ec839b72c211a2903bbb71455752fe93"} Jan 06 14:48:19 crc kubenswrapper[4869]: I0106 14:48:19.878441 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mcpqh" podStartSLOduration=3.763168565 podStartE2EDuration="6.878414876s" podCreationTimestamp="2026-01-06 14:48:13 +0000 UTC" firstStartedPulling="2026-01-06 14:48:15.781470784 +0000 UTC m=+2914.321158458" lastFinishedPulling="2026-01-06 14:48:18.896717105 +0000 UTC m=+2917.436404769" observedRunningTime="2026-01-06 14:48:19.870532292 +0000 UTC m=+2918.410219956" watchObservedRunningTime="2026-01-06 14:48:19.878414876 +0000 UTC m=+2918.418102570" Jan 06 14:48:21 crc kubenswrapper[4869]: I0106 14:48:21.865568 4869 generic.go:334] "Generic (PLEG): container finished" podID="c1f05f65-85af-4be3-92cc-e9fa5b317a73" containerID="e527bf4a2eda5d874c11155dc1c23f3107b70a8978cbdd3dbc73ec9693b0d2c7" exitCode=0 Jan 06 14:48:21 crc kubenswrapper[4869]: I0106 14:48:21.865636 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgp5m" 
event={"ID":"c1f05f65-85af-4be3-92cc-e9fa5b317a73","Type":"ContainerDied","Data":"e527bf4a2eda5d874c11155dc1c23f3107b70a8978cbdd3dbc73ec9693b0d2c7"} Jan 06 14:48:23 crc kubenswrapper[4869]: I0106 14:48:23.886997 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgp5m" event={"ID":"c1f05f65-85af-4be3-92cc-e9fa5b317a73","Type":"ContainerStarted","Data":"4d5f3b0a266f876e2abfe857a763c92f225a5c59a553736271cd5d52998008d1"} Jan 06 14:48:23 crc kubenswrapper[4869]: I0106 14:48:23.914098 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tgp5m" podStartSLOduration=2.88972323 podStartE2EDuration="5.914074995s" podCreationTimestamp="2026-01-06 14:48:18 +0000 UTC" firstStartedPulling="2026-01-06 14:48:19.836656877 +0000 UTC m=+2918.376344551" lastFinishedPulling="2026-01-06 14:48:22.861008652 +0000 UTC m=+2921.400696316" observedRunningTime="2026-01-06 14:48:23.905816346 +0000 UTC m=+2922.445504010" watchObservedRunningTime="2026-01-06 14:48:23.914074995 +0000 UTC m=+2922.453762659" Jan 06 14:48:24 crc kubenswrapper[4869]: I0106 14:48:24.189647 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mcpqh" Jan 06 14:48:24 crc kubenswrapper[4869]: I0106 14:48:24.190298 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mcpqh" Jan 06 14:48:25 crc kubenswrapper[4869]: I0106 14:48:25.251222 4869 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mcpqh" podUID="04b3de0f-7676-4f43-84f1-bec0f6609d2f" containerName="registry-server" probeResult="failure" output=< Jan 06 14:48:25 crc kubenswrapper[4869]: timeout: failed to connect service ":50051" within 1s Jan 06 14:48:25 crc kubenswrapper[4869]: > Jan 06 14:48:28 crc kubenswrapper[4869]: I0106 14:48:28.760933 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tgp5m" Jan 06 14:48:28 crc kubenswrapper[4869]: I0106 14:48:28.761008 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tgp5m" Jan 06 14:48:28 crc kubenswrapper[4869]: I0106 14:48:28.836152 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tgp5m" Jan 06 14:48:29 crc kubenswrapper[4869]: I0106 14:48:29.005718 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tgp5m" Jan 06 14:48:29 crc kubenswrapper[4869]: I0106 14:48:29.078703 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tgp5m"] Jan 06 14:48:30 crc kubenswrapper[4869]: I0106 14:48:30.960648 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tgp5m" podUID="c1f05f65-85af-4be3-92cc-e9fa5b317a73" containerName="registry-server" containerID="cri-o://4d5f3b0a266f876e2abfe857a763c92f225a5c59a553736271cd5d52998008d1" gracePeriod=2 Jan 06 14:48:31 crc kubenswrapper[4869]: I0106 14:48:31.415764 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tgp5m" Jan 06 14:48:31 crc kubenswrapper[4869]: I0106 14:48:31.529867 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1f05f65-85af-4be3-92cc-e9fa5b317a73-utilities\") pod \"c1f05f65-85af-4be3-92cc-e9fa5b317a73\" (UID: \"c1f05f65-85af-4be3-92cc-e9fa5b317a73\") " Jan 06 14:48:31 crc kubenswrapper[4869]: I0106 14:48:31.529968 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1f05f65-85af-4be3-92cc-e9fa5b317a73-catalog-content\") pod \"c1f05f65-85af-4be3-92cc-e9fa5b317a73\" (UID: \"c1f05f65-85af-4be3-92cc-e9fa5b317a73\") " Jan 06 14:48:31 crc kubenswrapper[4869]: I0106 14:48:31.530138 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gzs7\" (UniqueName: \"kubernetes.io/projected/c1f05f65-85af-4be3-92cc-e9fa5b317a73-kube-api-access-7gzs7\") pod \"c1f05f65-85af-4be3-92cc-e9fa5b317a73\" (UID: \"c1f05f65-85af-4be3-92cc-e9fa5b317a73\") " Jan 06 14:48:31 crc kubenswrapper[4869]: I0106 14:48:31.530711 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1f05f65-85af-4be3-92cc-e9fa5b317a73-utilities" (OuterVolumeSpecName: "utilities") pod "c1f05f65-85af-4be3-92cc-e9fa5b317a73" (UID: "c1f05f65-85af-4be3-92cc-e9fa5b317a73"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:48:31 crc kubenswrapper[4869]: I0106 14:48:31.538494 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1f05f65-85af-4be3-92cc-e9fa5b317a73-kube-api-access-7gzs7" (OuterVolumeSpecName: "kube-api-access-7gzs7") pod "c1f05f65-85af-4be3-92cc-e9fa5b317a73" (UID: "c1f05f65-85af-4be3-92cc-e9fa5b317a73"). InnerVolumeSpecName "kube-api-access-7gzs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:48:31 crc kubenswrapper[4869]: I0106 14:48:31.594458 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1f05f65-85af-4be3-92cc-e9fa5b317a73-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1f05f65-85af-4be3-92cc-e9fa5b317a73" (UID: "c1f05f65-85af-4be3-92cc-e9fa5b317a73"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:48:31 crc kubenswrapper[4869]: I0106 14:48:31.633452 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1f05f65-85af-4be3-92cc-e9fa5b317a73-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:48:31 crc kubenswrapper[4869]: I0106 14:48:31.633508 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1f05f65-85af-4be3-92cc-e9fa5b317a73-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:48:31 crc kubenswrapper[4869]: I0106 14:48:31.633530 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gzs7\" (UniqueName: \"kubernetes.io/projected/c1f05f65-85af-4be3-92cc-e9fa5b317a73-kube-api-access-7gzs7\") on node \"crc\" DevicePath \"\"" Jan 06 14:48:31 crc kubenswrapper[4869]: I0106 14:48:31.972210 4869 generic.go:334] "Generic (PLEG): container finished" podID="c1f05f65-85af-4be3-92cc-e9fa5b317a73" containerID="4d5f3b0a266f876e2abfe857a763c92f225a5c59a553736271cd5d52998008d1" exitCode=0 Jan 06 14:48:31 crc kubenswrapper[4869]: I0106 14:48:31.972267 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgp5m" event={"ID":"c1f05f65-85af-4be3-92cc-e9fa5b317a73","Type":"ContainerDied","Data":"4d5f3b0a266f876e2abfe857a763c92f225a5c59a553736271cd5d52998008d1"} Jan 06 14:48:31 crc kubenswrapper[4869]: I0106 14:48:31.972304 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgp5m" event={"ID":"c1f05f65-85af-4be3-92cc-e9fa5b317a73","Type":"ContainerDied","Data":"494459cf3ab953b68b260b4b835b50c71a94a05b8b03e6609e7a883525c290f8"} Jan 06 14:48:31 crc kubenswrapper[4869]: I0106 14:48:31.972326 4869 scope.go:117] "RemoveContainer" containerID="4d5f3b0a266f876e2abfe857a763c92f225a5c59a553736271cd5d52998008d1" Jan 06 14:48:31 crc kubenswrapper[4869]: I0106 14:48:31.974378 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tgp5m" Jan 06 14:48:32 crc kubenswrapper[4869]: I0106 14:48:32.009354 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tgp5m"] Jan 06 14:48:32 crc kubenswrapper[4869]: I0106 14:48:32.012498 4869 scope.go:117] "RemoveContainer" containerID="e527bf4a2eda5d874c11155dc1c23f3107b70a8978cbdd3dbc73ec9693b0d2c7" Jan 06 14:48:32 crc kubenswrapper[4869]: I0106 14:48:32.018634 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tgp5m"] Jan 06 14:48:32 crc kubenswrapper[4869]: I0106 14:48:32.047535 4869 scope.go:117] "RemoveContainer" containerID="550ca8890d1121439ed29a672204227921d03e9e6c93ba2173aa9c55e7ce502a" Jan 06 14:48:32 crc kubenswrapper[4869]: I0106 14:48:32.099603 4869 scope.go:117] "RemoveContainer" containerID="4d5f3b0a266f876e2abfe857a763c92f225a5c59a553736271cd5d52998008d1" Jan 06 14:48:32 crc kubenswrapper[4869]: E0106 14:48:32.100248 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d5f3b0a266f876e2abfe857a763c92f225a5c59a553736271cd5d52998008d1\": container with ID starting with 4d5f3b0a266f876e2abfe857a763c92f225a5c59a553736271cd5d52998008d1 not found: ID does not exist" containerID="4d5f3b0a266f876e2abfe857a763c92f225a5c59a553736271cd5d52998008d1" Jan 06 14:48:32 crc kubenswrapper[4869]: I0106 14:48:32.100285 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d5f3b0a266f876e2abfe857a763c92f225a5c59a553736271cd5d52998008d1"} err="failed to get container status \"4d5f3b0a266f876e2abfe857a763c92f225a5c59a553736271cd5d52998008d1\": rpc error: code = NotFound desc = could not find container \"4d5f3b0a266f876e2abfe857a763c92f225a5c59a553736271cd5d52998008d1\": container with ID starting with 4d5f3b0a266f876e2abfe857a763c92f225a5c59a553736271cd5d52998008d1 not found: ID does not exist" Jan 06 14:48:32 crc kubenswrapper[4869]: I0106 14:48:32.100305 4869 scope.go:117] "RemoveContainer" containerID="e527bf4a2eda5d874c11155dc1c23f3107b70a8978cbdd3dbc73ec9693b0d2c7" Jan 06 14:48:32 crc kubenswrapper[4869]: E0106 14:48:32.100629 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e527bf4a2eda5d874c11155dc1c23f3107b70a8978cbdd3dbc73ec9693b0d2c7\": container with ID starting with e527bf4a2eda5d874c11155dc1c23f3107b70a8978cbdd3dbc73ec9693b0d2c7 not found: ID does not exist" containerID="e527bf4a2eda5d874c11155dc1c23f3107b70a8978cbdd3dbc73ec9693b0d2c7" Jan 06 14:48:32 crc kubenswrapper[4869]: I0106 14:48:32.100650 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e527bf4a2eda5d874c11155dc1c23f3107b70a8978cbdd3dbc73ec9693b0d2c7"} err="failed to get container status \"e527bf4a2eda5d874c11155dc1c23f3107b70a8978cbdd3dbc73ec9693b0d2c7\": rpc error: code = NotFound desc = could not find container \"e527bf4a2eda5d874c11155dc1c23f3107b70a8978cbdd3dbc73ec9693b0d2c7\": container with ID starting with e527bf4a2eda5d874c11155dc1c23f3107b70a8978cbdd3dbc73ec9693b0d2c7 not found: ID does not exist" Jan 06 14:48:32 crc kubenswrapper[4869]: I0106 14:48:32.100676 4869 scope.go:117] "RemoveContainer" containerID="550ca8890d1121439ed29a672204227921d03e9e6c93ba2173aa9c55e7ce502a" Jan 06 14:48:32 crc kubenswrapper[4869]: E0106 14:48:32.101045 4869 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"550ca8890d1121439ed29a672204227921d03e9e6c93ba2173aa9c55e7ce502a\": container with ID starting with 550ca8890d1121439ed29a672204227921d03e9e6c93ba2173aa9c55e7ce502a not found: ID does not exist" containerID="550ca8890d1121439ed29a672204227921d03e9e6c93ba2173aa9c55e7ce502a" Jan 06 14:48:32 crc kubenswrapper[4869]: I0106 14:48:32.101083 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"550ca8890d1121439ed29a672204227921d03e9e6c93ba2173aa9c55e7ce502a"} err="failed to get container status \"550ca8890d1121439ed29a672204227921d03e9e6c93ba2173aa9c55e7ce502a\": rpc error: code = NotFound desc = could not find container \"550ca8890d1121439ed29a672204227921d03e9e6c93ba2173aa9c55e7ce502a\": container with ID starting with 550ca8890d1121439ed29a672204227921d03e9e6c93ba2173aa9c55e7ce502a not found: ID does not exist" Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.491429 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q6b85"] Jan 06 14:48:33 crc kubenswrapper[4869]: E0106 14:48:33.493625 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f05f65-85af-4be3-92cc-e9fa5b317a73" containerName="extract-content" Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.493803 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f05f65-85af-4be3-92cc-e9fa5b317a73" containerName="extract-content" Jan 06 14:48:33 crc kubenswrapper[4869]: E0106 14:48:33.493890 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f05f65-85af-4be3-92cc-e9fa5b317a73" containerName="registry-server" Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.494021 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f05f65-85af-4be3-92cc-e9fa5b317a73" containerName="registry-server" Jan 06 14:48:33 crc kubenswrapper[4869]: E0106 14:48:33.494174 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f05f65-85af-4be3-92cc-e9fa5b317a73" containerName="extract-utilities" Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.494240 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f05f65-85af-4be3-92cc-e9fa5b317a73" containerName="extract-utilities" Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.494819 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1f05f65-85af-4be3-92cc-e9fa5b317a73" containerName="registry-server" Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.498530 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q6b85" Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.519881 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q6b85"] Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.573176 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e840ad0-7357-46da-b399-4050542e8495-catalog-content\") pod \"community-operators-q6b85\" (UID: \"8e840ad0-7357-46da-b399-4050542e8495\") " pod="openshift-marketplace/community-operators-q6b85" Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.573237 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwsx5\" (UniqueName: \"kubernetes.io/projected/8e840ad0-7357-46da-b399-4050542e8495-kube-api-access-gwsx5\") pod \"community-operators-q6b85\" (UID: \"8e840ad0-7357-46da-b399-4050542e8495\") " pod="openshift-marketplace/community-operators-q6b85" Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.573262 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e840ad0-7357-46da-b399-4050542e8495-utilities\") pod \"community-operators-q6b85\" (UID: \"8e840ad0-7357-46da-b399-4050542e8495\") " pod="openshift-marketplace/community-operators-q6b85" Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.622350 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.622435 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.622514 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.623583 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d"} pod="openshift-machine-config-operator/machine-config-daemon-kt9df" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.623647 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" containerID="cri-o://c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" gracePeriod=600 Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.674644 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e840ad0-7357-46da-b399-4050542e8495-catalog-content\") pod 
\"community-operators-q6b85\" (UID: \"8e840ad0-7357-46da-b399-4050542e8495\") " pod="openshift-marketplace/community-operators-q6b85" Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.674720 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwsx5\" (UniqueName: \"kubernetes.io/projected/8e840ad0-7357-46da-b399-4050542e8495-kube-api-access-gwsx5\") pod \"community-operators-q6b85\" (UID: \"8e840ad0-7357-46da-b399-4050542e8495\") " pod="openshift-marketplace/community-operators-q6b85" Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.674752 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e840ad0-7357-46da-b399-4050542e8495-utilities\") pod \"community-operators-q6b85\" (UID: \"8e840ad0-7357-46da-b399-4050542e8495\") " pod="openshift-marketplace/community-operators-q6b85" Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.675234 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e840ad0-7357-46da-b399-4050542e8495-catalog-content\") pod \"community-operators-q6b85\" (UID: \"8e840ad0-7357-46da-b399-4050542e8495\") " pod="openshift-marketplace/community-operators-q6b85" Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.675276 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e840ad0-7357-46da-b399-4050542e8495-utilities\") pod \"community-operators-q6b85\" (UID: \"8e840ad0-7357-46da-b399-4050542e8495\") " pod="openshift-marketplace/community-operators-q6b85" Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.696354 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwsx5\" (UniqueName: \"kubernetes.io/projected/8e840ad0-7357-46da-b399-4050542e8495-kube-api-access-gwsx5\") pod \"community-operators-q6b85\" (UID: \"8e840ad0-7357-46da-b399-4050542e8495\") " pod="openshift-marketplace/community-operators-q6b85" Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.719331 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1f05f65-85af-4be3-92cc-e9fa5b317a73" path="/var/lib/kubelet/pods/c1f05f65-85af-4be3-92cc-e9fa5b317a73/volumes" Jan 06 14:48:33 crc kubenswrapper[4869]: E0106 14:48:33.742118 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:48:33 crc kubenswrapper[4869]: I0106 14:48:33.863234 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q6b85" Jan 06 14:48:34 crc kubenswrapper[4869]: I0106 14:48:34.022339 4869 generic.go:334] "Generic (PLEG): container finished" podID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" exitCode=0 Jan 06 14:48:34 crc kubenswrapper[4869]: I0106 14:48:34.022392 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerDied","Data":"c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d"} Jan 06 14:48:34 crc kubenswrapper[4869]: I0106 14:48:34.022432 4869 scope.go:117] "RemoveContainer" containerID="2569f0867fe8fe621684413b395cccbe3394585df22f96bbc1a3cd7b50aaafc6" Jan 06 14:48:34 crc kubenswrapper[4869]: I0106 14:48:34.023296 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:48:34 crc kubenswrapper[4869]: E0106 14:48:34.023711 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:48:34 crc kubenswrapper[4869]: I0106 14:48:34.238415 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mcpqh" Jan 06 14:48:34 crc kubenswrapper[4869]: I0106 14:48:34.282768 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mcpqh" Jan 06 14:48:34 crc kubenswrapper[4869]: W0106 14:48:34.370485 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e840ad0_7357_46da_b399_4050542e8495.slice/crio-56864a3b0f393083c803175236da54c11468bb1b473b6bdee5e8db9b35169f29 WatchSource:0}: Error finding container 56864a3b0f393083c803175236da54c11468bb1b473b6bdee5e8db9b35169f29: Status 404 returned error can't find the container with id 56864a3b0f393083c803175236da54c11468bb1b473b6bdee5e8db9b35169f29 Jan 06 14:48:34 crc kubenswrapper[4869]: I0106 14:48:34.370756 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q6b85"] Jan 06 14:48:35 crc kubenswrapper[4869]: I0106 14:48:35.033779 4869 generic.go:334] "Generic (PLEG): container finished" podID="8e840ad0-7357-46da-b399-4050542e8495" containerID="270ae5d5bcbe5cf2f676c3ac1c04f416bedc63e9304aec77963f01e109639edf" exitCode=0 Jan 06 14:48:35 crc kubenswrapper[4869]: I0106 14:48:35.033843 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6b85" event={"ID":"8e840ad0-7357-46da-b399-4050542e8495","Type":"ContainerDied","Data":"270ae5d5bcbe5cf2f676c3ac1c04f416bedc63e9304aec77963f01e109639edf"} Jan 06 14:48:35 crc kubenswrapper[4869]: I0106 14:48:35.033927 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6b85" event={"ID":"8e840ad0-7357-46da-b399-4050542e8495","Type":"ContainerStarted","Data":"56864a3b0f393083c803175236da54c11468bb1b473b6bdee5e8db9b35169f29"} Jan 06 14:48:36 crc kubenswrapper[4869]: 
I0106 14:48:36.047404 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6b85" event={"ID":"8e840ad0-7357-46da-b399-4050542e8495","Type":"ContainerStarted","Data":"156c8ccb52fb6ba76ba9fcf4386cf542943844ce4490f83c94844d1264d0fde4"} Jan 06 14:48:36 crc kubenswrapper[4869]: I0106 14:48:36.483618 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mcpqh"] Jan 06 14:48:36 crc kubenswrapper[4869]: I0106 14:48:36.483897 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mcpqh" podUID="04b3de0f-7676-4f43-84f1-bec0f6609d2f" containerName="registry-server" containerID="cri-o://f5e4c86b331136abbc52bc78d2bcc722ec839b72c211a2903bbb71455752fe93" gracePeriod=2 Jan 06 14:48:36 crc kubenswrapper[4869]: I0106 14:48:36.985560 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mcpqh" Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.057886 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04b3de0f-7676-4f43-84f1-bec0f6609d2f-utilities\") pod \"04b3de0f-7676-4f43-84f1-bec0f6609d2f\" (UID: \"04b3de0f-7676-4f43-84f1-bec0f6609d2f\") " Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.058019 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kmxm\" (UniqueName: \"kubernetes.io/projected/04b3de0f-7676-4f43-84f1-bec0f6609d2f-kube-api-access-7kmxm\") pod \"04b3de0f-7676-4f43-84f1-bec0f6609d2f\" (UID: \"04b3de0f-7676-4f43-84f1-bec0f6609d2f\") " Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.058065 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04b3de0f-7676-4f43-84f1-bec0f6609d2f-catalog-content\") pod \"04b3de0f-7676-4f43-84f1-bec0f6609d2f\" (UID: \"04b3de0f-7676-4f43-84f1-bec0f6609d2f\") " Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.060176 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04b3de0f-7676-4f43-84f1-bec0f6609d2f-utilities" (OuterVolumeSpecName: "utilities") pod "04b3de0f-7676-4f43-84f1-bec0f6609d2f" (UID: "04b3de0f-7676-4f43-84f1-bec0f6609d2f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.060296 4869 generic.go:334] "Generic (PLEG): container finished" podID="8e840ad0-7357-46da-b399-4050542e8495" containerID="156c8ccb52fb6ba76ba9fcf4386cf542943844ce4490f83c94844d1264d0fde4" exitCode=0 Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.060389 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6b85" event={"ID":"8e840ad0-7357-46da-b399-4050542e8495","Type":"ContainerDied","Data":"156c8ccb52fb6ba76ba9fcf4386cf542943844ce4490f83c94844d1264d0fde4"} Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.061379 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04b3de0f-7676-4f43-84f1-bec0f6609d2f-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.064394 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04b3de0f-7676-4f43-84f1-bec0f6609d2f-kube-api-access-7kmxm" (OuterVolumeSpecName: "kube-api-access-7kmxm") pod "04b3de0f-7676-4f43-84f1-bec0f6609d2f" (UID: "04b3de0f-7676-4f43-84f1-bec0f6609d2f"). InnerVolumeSpecName "kube-api-access-7kmxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.068939 4869 generic.go:334] "Generic (PLEG): container finished" podID="04b3de0f-7676-4f43-84f1-bec0f6609d2f" containerID="f5e4c86b331136abbc52bc78d2bcc722ec839b72c211a2903bbb71455752fe93" exitCode=0 Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.068986 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcpqh" event={"ID":"04b3de0f-7676-4f43-84f1-bec0f6609d2f","Type":"ContainerDied","Data":"f5e4c86b331136abbc52bc78d2bcc722ec839b72c211a2903bbb71455752fe93"} Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.069020 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mcpqh" event={"ID":"04b3de0f-7676-4f43-84f1-bec0f6609d2f","Type":"ContainerDied","Data":"7edd6d961f5537a412da238b4bb27b8193532aca148ad585cb9378cb14045c7c"} Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.069043 4869 scope.go:117] "RemoveContainer" containerID="f5e4c86b331136abbc52bc78d2bcc722ec839b72c211a2903bbb71455752fe93" Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.069148 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mcpqh" Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.140305 4869 scope.go:117] "RemoveContainer" containerID="bfea18aa14766dd4e84f370b95b7ea55bef7fa04c00fa6748dae297036523a9d" Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.165745 4869 scope.go:117] "RemoveContainer" containerID="d02ff0ea8271afa69a3870ee4647d345ae23e5caca6358416920194a9dfca83f" Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.167986 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kmxm\" (UniqueName: \"kubernetes.io/projected/04b3de0f-7676-4f43-84f1-bec0f6609d2f-kube-api-access-7kmxm\") on node \"crc\" DevicePath \"\"" Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.210148 4869 scope.go:117] "RemoveContainer" containerID="f5e4c86b331136abbc52bc78d2bcc722ec839b72c211a2903bbb71455752fe93" Jan 06 14:48:37 crc kubenswrapper[4869]: E0106 14:48:37.211213 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5e4c86b331136abbc52bc78d2bcc722ec839b72c211a2903bbb71455752fe93\": container with ID starting with f5e4c86b331136abbc52bc78d2bcc722ec839b72c211a2903bbb71455752fe93 not found: ID does not exist" containerID="f5e4c86b331136abbc52bc78d2bcc722ec839b72c211a2903bbb71455752fe93" Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.211284 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5e4c86b331136abbc52bc78d2bcc722ec839b72c211a2903bbb71455752fe93"} err="failed to get container status \"f5e4c86b331136abbc52bc78d2bcc722ec839b72c211a2903bbb71455752fe93\": rpc error: code = NotFound desc = could not find container \"f5e4c86b331136abbc52bc78d2bcc722ec839b72c211a2903bbb71455752fe93\": container with ID starting with f5e4c86b331136abbc52bc78d2bcc722ec839b72c211a2903bbb71455752fe93 not found: ID does not exist" Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.211335 4869 scope.go:117] "RemoveContainer" containerID="bfea18aa14766dd4e84f370b95b7ea55bef7fa04c00fa6748dae297036523a9d" Jan 06 14:48:37 crc kubenswrapper[4869]: E0106 14:48:37.212032 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfea18aa14766dd4e84f370b95b7ea55bef7fa04c00fa6748dae297036523a9d\": container with ID starting with bfea18aa14766dd4e84f370b95b7ea55bef7fa04c00fa6748dae297036523a9d not found: ID does not exist" containerID="bfea18aa14766dd4e84f370b95b7ea55bef7fa04c00fa6748dae297036523a9d" Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.212091 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfea18aa14766dd4e84f370b95b7ea55bef7fa04c00fa6748dae297036523a9d"} err="failed to get container status \"bfea18aa14766dd4e84f370b95b7ea55bef7fa04c00fa6748dae297036523a9d\": rpc error: code = NotFound desc = could not find container \"bfea18aa14766dd4e84f370b95b7ea55bef7fa04c00fa6748dae297036523a9d\": container with ID starting with bfea18aa14766dd4e84f370b95b7ea55bef7fa04c00fa6748dae297036523a9d not found: ID does not exist" Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.212119 4869 scope.go:117] "RemoveContainer" containerID="d02ff0ea8271afa69a3870ee4647d345ae23e5caca6358416920194a9dfca83f" Jan 06 14:48:37 crc kubenswrapper[4869]: E0106 14:48:37.212495 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"d02ff0ea8271afa69a3870ee4647d345ae23e5caca6358416920194a9dfca83f\": container with ID starting with d02ff0ea8271afa69a3870ee4647d345ae23e5caca6358416920194a9dfca83f not found: ID does not exist" containerID="d02ff0ea8271afa69a3870ee4647d345ae23e5caca6358416920194a9dfca83f" Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.212533 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d02ff0ea8271afa69a3870ee4647d345ae23e5caca6358416920194a9dfca83f"} err="failed to get container status \"d02ff0ea8271afa69a3870ee4647d345ae23e5caca6358416920194a9dfca83f\": rpc error: code = NotFound desc = could not find container \"d02ff0ea8271afa69a3870ee4647d345ae23e5caca6358416920194a9dfca83f\": container with ID starting with d02ff0ea8271afa69a3870ee4647d345ae23e5caca6358416920194a9dfca83f not found: ID does not exist" Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.220543 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04b3de0f-7676-4f43-84f1-bec0f6609d2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04b3de0f-7676-4f43-84f1-bec0f6609d2f" (UID: "04b3de0f-7676-4f43-84f1-bec0f6609d2f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.270267 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04b3de0f-7676-4f43-84f1-bec0f6609d2f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.424817 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mcpqh"] Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.440652 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mcpqh"] Jan 06 14:48:37 crc kubenswrapper[4869]: I0106 14:48:37.734935 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04b3de0f-7676-4f43-84f1-bec0f6609d2f" path="/var/lib/kubelet/pods/04b3de0f-7676-4f43-84f1-bec0f6609d2f/volumes" Jan 06 14:48:38 crc kubenswrapper[4869]: I0106 14:48:38.088763 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6b85" event={"ID":"8e840ad0-7357-46da-b399-4050542e8495","Type":"ContainerStarted","Data":"ea6dd1cc899020d5120c8cb4c2c0b9502e743b30432c39049d4e605fc7db9a79"} Jan 06 14:48:38 crc kubenswrapper[4869]: I0106 14:48:38.109488 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q6b85" podStartSLOduration=2.6179481190000002 podStartE2EDuration="5.109462929s" podCreationTimestamp="2026-01-06 14:48:33 +0000 UTC" firstStartedPulling="2026-01-06 14:48:35.039102577 +0000 UTC m=+2933.578790271" lastFinishedPulling="2026-01-06 14:48:37.530617377 +0000 UTC m=+2936.070305081" observedRunningTime="2026-01-06 14:48:38.107078001 +0000 UTC m=+2936.646765665" watchObservedRunningTime="2026-01-06 14:48:38.109462929 +0000 UTC m=+2936.649150593" Jan 06 14:48:43 crc kubenswrapper[4869]: I0106 14:48:43.863569 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q6b85" Jan 06 14:48:43 crc kubenswrapper[4869]: I0106 14:48:43.865426 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q6b85" Jan 06 14:48:43 crc 
kubenswrapper[4869]: I0106 14:48:43.921365 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q6b85" Jan 06 14:48:44 crc kubenswrapper[4869]: I0106 14:48:44.213893 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q6b85" Jan 06 14:48:45 crc kubenswrapper[4869]: I0106 14:48:45.045062 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q6b85"] Jan 06 14:48:46 crc kubenswrapper[4869]: I0106 14:48:46.170766 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q6b85" podUID="8e840ad0-7357-46da-b399-4050542e8495" containerName="registry-server" containerID="cri-o://ea6dd1cc899020d5120c8cb4c2c0b9502e743b30432c39049d4e605fc7db9a79" gracePeriod=2 Jan 06 14:48:46 crc kubenswrapper[4869]: I0106 14:48:46.632325 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q6b85" Jan 06 14:48:46 crc kubenswrapper[4869]: I0106 14:48:46.667892 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e840ad0-7357-46da-b399-4050542e8495-utilities\") pod \"8e840ad0-7357-46da-b399-4050542e8495\" (UID: \"8e840ad0-7357-46da-b399-4050542e8495\") " Jan 06 14:48:46 crc kubenswrapper[4869]: I0106 14:48:46.668081 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e840ad0-7357-46da-b399-4050542e8495-catalog-content\") pod \"8e840ad0-7357-46da-b399-4050542e8495\" (UID: \"8e840ad0-7357-46da-b399-4050542e8495\") " Jan 06 14:48:46 crc kubenswrapper[4869]: I0106 14:48:46.668111 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwsx5\" (UniqueName: \"kubernetes.io/projected/8e840ad0-7357-46da-b399-4050542e8495-kube-api-access-gwsx5\") pod \"8e840ad0-7357-46da-b399-4050542e8495\" (UID: \"8e840ad0-7357-46da-b399-4050542e8495\") " Jan 06 14:48:46 crc kubenswrapper[4869]: I0106 14:48:46.669175 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e840ad0-7357-46da-b399-4050542e8495-utilities" (OuterVolumeSpecName: "utilities") pod "8e840ad0-7357-46da-b399-4050542e8495" (UID: "8e840ad0-7357-46da-b399-4050542e8495"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:48:46 crc kubenswrapper[4869]: I0106 14:48:46.674622 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e840ad0-7357-46da-b399-4050542e8495-kube-api-access-gwsx5" (OuterVolumeSpecName: "kube-api-access-gwsx5") pod "8e840ad0-7357-46da-b399-4050542e8495" (UID: "8e840ad0-7357-46da-b399-4050542e8495"). InnerVolumeSpecName "kube-api-access-gwsx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:48:46 crc kubenswrapper[4869]: I0106 14:48:46.733899 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e840ad0-7357-46da-b399-4050542e8495-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e840ad0-7357-46da-b399-4050542e8495" (UID: "8e840ad0-7357-46da-b399-4050542e8495"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:48:46 crc kubenswrapper[4869]: I0106 14:48:46.770221 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e840ad0-7357-46da-b399-4050542e8495-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:48:46 crc kubenswrapper[4869]: I0106 14:48:46.770257 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwsx5\" (UniqueName: \"kubernetes.io/projected/8e840ad0-7357-46da-b399-4050542e8495-kube-api-access-gwsx5\") on node \"crc\" DevicePath \"\"" Jan 06 14:48:46 crc kubenswrapper[4869]: I0106 14:48:46.770269 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e840ad0-7357-46da-b399-4050542e8495-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:48:47 crc kubenswrapper[4869]: I0106 14:48:47.182959 4869 generic.go:334] "Generic (PLEG): container finished" podID="8e840ad0-7357-46da-b399-4050542e8495" containerID="ea6dd1cc899020d5120c8cb4c2c0b9502e743b30432c39049d4e605fc7db9a79" exitCode=0 Jan 06 14:48:47 crc kubenswrapper[4869]: I0106 14:48:47.183022 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6b85" event={"ID":"8e840ad0-7357-46da-b399-4050542e8495","Type":"ContainerDied","Data":"ea6dd1cc899020d5120c8cb4c2c0b9502e743b30432c39049d4e605fc7db9a79"} Jan 06 14:48:47 crc kubenswrapper[4869]: I0106 14:48:47.183035 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q6b85" Jan 06 14:48:47 crc kubenswrapper[4869]: I0106 14:48:47.183093 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6b85" event={"ID":"8e840ad0-7357-46da-b399-4050542e8495","Type":"ContainerDied","Data":"56864a3b0f393083c803175236da54c11468bb1b473b6bdee5e8db9b35169f29"} Jan 06 14:48:47 crc kubenswrapper[4869]: I0106 14:48:47.183118 4869 scope.go:117] "RemoveContainer" containerID="ea6dd1cc899020d5120c8cb4c2c0b9502e743b30432c39049d4e605fc7db9a79" Jan 06 14:48:47 crc kubenswrapper[4869]: I0106 14:48:47.203623 4869 scope.go:117] "RemoveContainer" containerID="156c8ccb52fb6ba76ba9fcf4386cf542943844ce4490f83c94844d1264d0fde4" Jan 06 14:48:47 crc kubenswrapper[4869]: I0106 14:48:47.213683 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q6b85"] Jan 06 14:48:47 crc kubenswrapper[4869]: I0106 14:48:47.222385 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q6b85"] Jan 06 14:48:47 crc kubenswrapper[4869]: I0106 14:48:47.239363 4869 scope.go:117] "RemoveContainer" containerID="270ae5d5bcbe5cf2f676c3ac1c04f416bedc63e9304aec77963f01e109639edf" Jan 06 14:48:47 crc kubenswrapper[4869]: I0106 14:48:47.263222 4869 scope.go:117] "RemoveContainer" containerID="ea6dd1cc899020d5120c8cb4c2c0b9502e743b30432c39049d4e605fc7db9a79" Jan 06 14:48:47 crc kubenswrapper[4869]: E0106 14:48:47.263719 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea6dd1cc899020d5120c8cb4c2c0b9502e743b30432c39049d4e605fc7db9a79\": container with ID starting with ea6dd1cc899020d5120c8cb4c2c0b9502e743b30432c39049d4e605fc7db9a79 not found: ID does not exist" containerID="ea6dd1cc899020d5120c8cb4c2c0b9502e743b30432c39049d4e605fc7db9a79" Jan 06 14:48:47 crc kubenswrapper[4869]: I0106 14:48:47.263770 
4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea6dd1cc899020d5120c8cb4c2c0b9502e743b30432c39049d4e605fc7db9a79"} err="failed to get container status \"ea6dd1cc899020d5120c8cb4c2c0b9502e743b30432c39049d4e605fc7db9a79\": rpc error: code = NotFound desc = could not find container \"ea6dd1cc899020d5120c8cb4c2c0b9502e743b30432c39049d4e605fc7db9a79\": container with ID starting with ea6dd1cc899020d5120c8cb4c2c0b9502e743b30432c39049d4e605fc7db9a79 not found: ID does not exist" Jan 06 14:48:47 crc kubenswrapper[4869]: I0106 14:48:47.263796 4869 scope.go:117] "RemoveContainer" containerID="156c8ccb52fb6ba76ba9fcf4386cf542943844ce4490f83c94844d1264d0fde4" Jan 06 14:48:47 crc kubenswrapper[4869]: E0106 14:48:47.264227 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"156c8ccb52fb6ba76ba9fcf4386cf542943844ce4490f83c94844d1264d0fde4\": container with ID starting with 156c8ccb52fb6ba76ba9fcf4386cf542943844ce4490f83c94844d1264d0fde4 not found: ID does not exist" containerID="156c8ccb52fb6ba76ba9fcf4386cf542943844ce4490f83c94844d1264d0fde4" Jan 06 14:48:47 crc kubenswrapper[4869]: I0106 14:48:47.264252 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"156c8ccb52fb6ba76ba9fcf4386cf542943844ce4490f83c94844d1264d0fde4"} err="failed to get container status \"156c8ccb52fb6ba76ba9fcf4386cf542943844ce4490f83c94844d1264d0fde4\": rpc error: code = NotFound desc = could not find container \"156c8ccb52fb6ba76ba9fcf4386cf542943844ce4490f83c94844d1264d0fde4\": container with ID starting with 156c8ccb52fb6ba76ba9fcf4386cf542943844ce4490f83c94844d1264d0fde4 not found: ID does not exist" Jan 06 14:48:47 crc kubenswrapper[4869]: I0106 14:48:47.264267 4869 scope.go:117] "RemoveContainer" containerID="270ae5d5bcbe5cf2f676c3ac1c04f416bedc63e9304aec77963f01e109639edf" Jan 06 14:48:47 crc kubenswrapper[4869]: E0106 14:48:47.264518 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"270ae5d5bcbe5cf2f676c3ac1c04f416bedc63e9304aec77963f01e109639edf\": container with ID starting with 270ae5d5bcbe5cf2f676c3ac1c04f416bedc63e9304aec77963f01e109639edf not found: ID does not exist" containerID="270ae5d5bcbe5cf2f676c3ac1c04f416bedc63e9304aec77963f01e109639edf" Jan 06 14:48:47 crc kubenswrapper[4869]: I0106 14:48:47.264549 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"270ae5d5bcbe5cf2f676c3ac1c04f416bedc63e9304aec77963f01e109639edf"} err="failed to get container status \"270ae5d5bcbe5cf2f676c3ac1c04f416bedc63e9304aec77963f01e109639edf\": rpc error: code = NotFound desc = could not find container \"270ae5d5bcbe5cf2f676c3ac1c04f416bedc63e9304aec77963f01e109639edf\": container with ID starting with 270ae5d5bcbe5cf2f676c3ac1c04f416bedc63e9304aec77963f01e109639edf not found: ID does not exist" Jan 06 14:48:47 crc kubenswrapper[4869]: I0106 14:48:47.704263 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:48:47 crc kubenswrapper[4869]: E0106 14:48:47.704550 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:48:47 crc kubenswrapper[4869]: I0106 14:48:47.721433 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e840ad0-7357-46da-b399-4050542e8495" path="/var/lib/kubelet/pods/8e840ad0-7357-46da-b399-4050542e8495/volumes" Jan 06 14:48:59 crc kubenswrapper[4869]: I0106 14:48:59.705038 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:48:59 crc kubenswrapper[4869]: E0106 14:48:59.705836 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:49:11 crc kubenswrapper[4869]: I0106 14:49:11.719869 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:49:11 crc kubenswrapper[4869]: E0106 14:49:11.721303 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:49:24 crc kubenswrapper[4869]: I0106 14:49:24.705035 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:49:24 crc kubenswrapper[4869]: E0106 14:49:24.705733 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:49:37 crc kubenswrapper[4869]: I0106 14:49:37.704140 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:49:37 crc kubenswrapper[4869]: E0106 14:49:37.705076 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:49:52 crc kubenswrapper[4869]: I0106 14:49:52.703984 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:49:52 crc kubenswrapper[4869]: E0106 14:49:52.704633 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:50:04 crc kubenswrapper[4869]: I0106 14:50:04.518322 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8lptv"] Jan 06 14:50:04 crc kubenswrapper[4869]: E0106 14:50:04.520369 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b3de0f-7676-4f43-84f1-bec0f6609d2f" containerName="extract-content" Jan 06 14:50:04 crc kubenswrapper[4869]: I0106 14:50:04.520390 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b3de0f-7676-4f43-84f1-bec0f6609d2f" containerName="extract-content" Jan 06 14:50:04 crc kubenswrapper[4869]: E0106 14:50:04.520408 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b3de0f-7676-4f43-84f1-bec0f6609d2f" containerName="registry-server" Jan 06 14:50:04 crc kubenswrapper[4869]: I0106 14:50:04.520416 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b3de0f-7676-4f43-84f1-bec0f6609d2f" containerName="registry-server" Jan 06 14:50:04 crc kubenswrapper[4869]: E0106 14:50:04.520438 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e840ad0-7357-46da-b399-4050542e8495" containerName="extract-content" Jan 06 14:50:04 crc kubenswrapper[4869]: I0106 14:50:04.520451 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e840ad0-7357-46da-b399-4050542e8495" containerName="extract-content" Jan 06 14:50:04 crc kubenswrapper[4869]: E0106 14:50:04.520463 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b3de0f-7676-4f43-84f1-bec0f6609d2f" containerName="extract-utilities" Jan 06 14:50:04 crc kubenswrapper[4869]: I0106 14:50:04.520471 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b3de0f-7676-4f43-84f1-bec0f6609d2f" containerName="extract-utilities" Jan 06 14:50:04 crc kubenswrapper[4869]: E0106 14:50:04.520484 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e840ad0-7357-46da-b399-4050542e8495" containerName="registry-server" Jan 06 14:50:04 crc kubenswrapper[4869]: I0106 14:50:04.520492 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e840ad0-7357-46da-b399-4050542e8495" containerName="registry-server" Jan 06 14:50:04 crc kubenswrapper[4869]: E0106 14:50:04.520506 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e840ad0-7357-46da-b399-4050542e8495" containerName="extract-utilities" Jan 06 14:50:04 crc kubenswrapper[4869]: I0106 14:50:04.520513 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e840ad0-7357-46da-b399-4050542e8495" containerName="extract-utilities" Jan 06 14:50:04 crc kubenswrapper[4869]: I0106 14:50:04.520755 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b3de0f-7676-4f43-84f1-bec0f6609d2f" containerName="registry-server" Jan 06 14:50:04 crc kubenswrapper[4869]: I0106 14:50:04.520784 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e840ad0-7357-46da-b399-4050542e8495" containerName="registry-server" Jan 06 14:50:04 crc kubenswrapper[4869]: I0106 14:50:04.522462 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8lptv" Jan 06 14:50:04 crc kubenswrapper[4869]: I0106 14:50:04.533830 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lptv"] Jan 06 14:50:04 crc kubenswrapper[4869]: I0106 14:50:04.638533 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf02852-98f1-4f56-b0b4-593d1ada3dc0-utilities\") pod \"redhat-marketplace-8lptv\" (UID: \"bdf02852-98f1-4f56-b0b4-593d1ada3dc0\") " pod="openshift-marketplace/redhat-marketplace-8lptv" Jan 06 14:50:04 crc kubenswrapper[4869]: I0106 14:50:04.638635 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqpws\" (UniqueName: \"kubernetes.io/projected/bdf02852-98f1-4f56-b0b4-593d1ada3dc0-kube-api-access-xqpws\") pod \"redhat-marketplace-8lptv\" (UID: \"bdf02852-98f1-4f56-b0b4-593d1ada3dc0\") " pod="openshift-marketplace/redhat-marketplace-8lptv" Jan 06 14:50:04 crc kubenswrapper[4869]: I0106 14:50:04.638696 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf02852-98f1-4f56-b0b4-593d1ada3dc0-catalog-content\") pod \"redhat-marketplace-8lptv\" (UID: \"bdf02852-98f1-4f56-b0b4-593d1ada3dc0\") " pod="openshift-marketplace/redhat-marketplace-8lptv" Jan 06 14:50:04 crc kubenswrapper[4869]: I0106 14:50:04.740138 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf02852-98f1-4f56-b0b4-593d1ada3dc0-utilities\") pod \"redhat-marketplace-8lptv\" (UID: \"bdf02852-98f1-4f56-b0b4-593d1ada3dc0\") " pod="openshift-marketplace/redhat-marketplace-8lptv" Jan 06 14:50:04 crc kubenswrapper[4869]: I0106 14:50:04.740279 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqpws\" (UniqueName: \"kubernetes.io/projected/bdf02852-98f1-4f56-b0b4-593d1ada3dc0-kube-api-access-xqpws\") pod \"redhat-marketplace-8lptv\" (UID: \"bdf02852-98f1-4f56-b0b4-593d1ada3dc0\") " pod="openshift-marketplace/redhat-marketplace-8lptv" Jan 06 14:50:04 crc kubenswrapper[4869]: I0106 14:50:04.740313 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf02852-98f1-4f56-b0b4-593d1ada3dc0-catalog-content\") pod \"redhat-marketplace-8lptv\" (UID: \"bdf02852-98f1-4f56-b0b4-593d1ada3dc0\") " pod="openshift-marketplace/redhat-marketplace-8lptv" Jan 06 14:50:04 crc kubenswrapper[4869]: I0106 14:50:04.740957 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf02852-98f1-4f56-b0b4-593d1ada3dc0-utilities\") pod \"redhat-marketplace-8lptv\" (UID: \"bdf02852-98f1-4f56-b0b4-593d1ada3dc0\") " pod="openshift-marketplace/redhat-marketplace-8lptv" Jan 06 14:50:04 crc kubenswrapper[4869]: I0106 14:50:04.741022 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf02852-98f1-4f56-b0b4-593d1ada3dc0-catalog-content\") pod \"redhat-marketplace-8lptv\" (UID: \"bdf02852-98f1-4f56-b0b4-593d1ada3dc0\") " pod="openshift-marketplace/redhat-marketplace-8lptv" Jan 06 14:50:04 crc kubenswrapper[4869]: I0106 14:50:04.764789 4869 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xqpws\" (UniqueName: \"kubernetes.io/projected/bdf02852-98f1-4f56-b0b4-593d1ada3dc0-kube-api-access-xqpws\") pod \"redhat-marketplace-8lptv\" (UID: \"bdf02852-98f1-4f56-b0b4-593d1ada3dc0\") " pod="openshift-marketplace/redhat-marketplace-8lptv" Jan 06 14:50:04 crc kubenswrapper[4869]: I0106 14:50:04.856853 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8lptv" Jan 06 14:50:05 crc kubenswrapper[4869]: I0106 14:50:05.387214 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lptv"] Jan 06 14:50:05 crc kubenswrapper[4869]: I0106 14:50:05.918710 4869 generic.go:334] "Generic (PLEG): container finished" podID="bdf02852-98f1-4f56-b0b4-593d1ada3dc0" containerID="d93e2a308cc78a9355e3a58e4a6f023af4f7b4d00bba518711323dc31731900b" exitCode=0 Jan 06 14:50:05 crc kubenswrapper[4869]: I0106 14:50:05.918752 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lptv" event={"ID":"bdf02852-98f1-4f56-b0b4-593d1ada3dc0","Type":"ContainerDied","Data":"d93e2a308cc78a9355e3a58e4a6f023af4f7b4d00bba518711323dc31731900b"} Jan 06 14:50:05 crc kubenswrapper[4869]: I0106 14:50:05.918779 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lptv" event={"ID":"bdf02852-98f1-4f56-b0b4-593d1ada3dc0","Type":"ContainerStarted","Data":"7611058e4183f6195c4c29c451fa6c2a4ca509677548753fdfa5f78e9bfd452c"} Jan 06 14:50:06 crc kubenswrapper[4869]: I0106 14:50:06.704459 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:50:06 crc kubenswrapper[4869]: E0106 14:50:06.704973 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:50:10 crc kubenswrapper[4869]: I0106 14:50:10.972801 4869 generic.go:334] "Generic (PLEG): container finished" podID="bdf02852-98f1-4f56-b0b4-593d1ada3dc0" containerID="b5d2d1f07ff21cd3425fd7c74f83ac7c3c5288c3573c6dba1fb4efe2fde2960b" exitCode=0 Jan 06 14:50:10 crc kubenswrapper[4869]: I0106 14:50:10.972898 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lptv" event={"ID":"bdf02852-98f1-4f56-b0b4-593d1ada3dc0","Type":"ContainerDied","Data":"b5d2d1f07ff21cd3425fd7c74f83ac7c3c5288c3573c6dba1fb4efe2fde2960b"} Jan 06 14:50:19 crc kubenswrapper[4869]: I0106 14:50:19.704970 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:50:19 crc kubenswrapper[4869]: E0106 14:50:19.705921 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:50:21 crc kubenswrapper[4869]: I0106 14:50:21.767843 4869 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="cdd7985d-7085-4e06-9be1-e35e94d9c544" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 06 14:50:26 crc kubenswrapper[4869]: I0106 14:50:26.765449 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="cdd7985d-7085-4e06-9be1-e35e94d9c544" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 06 14:50:30 crc kubenswrapper[4869]: I0106 14:50:30.705335 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:50:30 crc kubenswrapper[4869]: E0106 14:50:30.706187 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:50:31 crc kubenswrapper[4869]: I0106 14:50:31.768947 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="cdd7985d-7085-4e06-9be1-e35e94d9c544" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 06 14:50:31 crc kubenswrapper[4869]: I0106 14:50:31.769064 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 06 14:50:31 crc kubenswrapper[4869]: I0106 14:50:31.770201 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"ba02e9514dd46eb3eaf5b645a5501dbdde41760ced908ad98f726d647b9072fe"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 06 14:50:31 crc kubenswrapper[4869]: I0106 14:50:31.770359 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cdd7985d-7085-4e06-9be1-e35e94d9c544" containerName="ceilometer-central-agent" containerID="cri-o://ba02e9514dd46eb3eaf5b645a5501dbdde41760ced908ad98f726d647b9072fe" gracePeriod=30 Jan 06 14:50:42 crc kubenswrapper[4869]: I0106 14:50:42.704051 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:50:42 crc kubenswrapper[4869]: E0106 14:50:42.704773 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:50:46 crc kubenswrapper[4869]: I0106 14:50:46.766464 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="cdd7985d-7085-4e06-9be1-e35e94d9c544" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 06 14:50:53 crc kubenswrapper[4869]: I0106 14:50:53.714763 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:50:53 crc kubenswrapper[4869]: E0106 
14:50:53.715874 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:51:02 crc kubenswrapper[4869]: I0106 14:51:02.741328 4869 generic.go:334] "Generic (PLEG): container finished" podID="cdd7985d-7085-4e06-9be1-e35e94d9c544" containerID="ba02e9514dd46eb3eaf5b645a5501dbdde41760ced908ad98f726d647b9072fe" exitCode=0 Jan 06 14:51:02 crc kubenswrapper[4869]: I0106 14:51:02.741495 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdd7985d-7085-4e06-9be1-e35e94d9c544","Type":"ContainerDied","Data":"ba02e9514dd46eb3eaf5b645a5501dbdde41760ced908ad98f726d647b9072fe"} Jan 06 14:51:03 crc kubenswrapper[4869]: I0106 14:51:03.754518 4869 generic.go:334] "Generic (PLEG): container finished" podID="8833c85f-4713-4005-a7ad-e3446d62c1cf" containerID="97319674790f8c1019b723619f5dafe603afa07e2bf8b86e580f7aba42d4d341" exitCode=0 Jan 06 14:51:03 crc kubenswrapper[4869]: I0106 14:51:03.754654 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" event={"ID":"8833c85f-4713-4005-a7ad-e3446d62c1cf","Type":"ContainerDied","Data":"97319674790f8c1019b723619f5dafe603afa07e2bf8b86e580f7aba42d4d341"} Jan 06 14:51:08 crc kubenswrapper[4869]: I0106 14:51:08.705439 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:51:08 crc kubenswrapper[4869]: E0106 14:51:08.706390 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:51:09 crc kubenswrapper[4869]: I0106 14:51:09.954653 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:51:09 crc kubenswrapper[4869]: I0106 14:51:09.982690 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-inventory\") pod \"8833c85f-4713-4005-a7ad-e3446d62c1cf\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " Jan 06 14:51:09 crc kubenswrapper[4869]: I0106 14:51:09.982806 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-cell1-compute-config-0\") pod \"8833c85f-4713-4005-a7ad-e3446d62c1cf\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " Jan 06 14:51:09 crc kubenswrapper[4869]: I0106 14:51:09.982892 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-ceph\") pod \"8833c85f-4713-4005-a7ad-e3446d62c1cf\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " Jan 06 14:51:09 crc kubenswrapper[4869]: I0106 14:51:09.983034 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-cell1-compute-config-1\") pod \"8833c85f-4713-4005-a7ad-e3446d62c1cf\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " Jan 06 14:51:09 crc kubenswrapper[4869]: I0106 14:51:09.983068 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8vc8\" (UniqueName: \"kubernetes.io/projected/8833c85f-4713-4005-a7ad-e3446d62c1cf-kube-api-access-n8vc8\") pod \"8833c85f-4713-4005-a7ad-e3446d62c1cf\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " Jan 06 14:51:09 crc kubenswrapper[4869]: I0106 14:51:09.983602 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-custom-ceph-combined-ca-bundle\") pod \"8833c85f-4713-4005-a7ad-e3446d62c1cf\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " Jan 06 14:51:09 crc kubenswrapper[4869]: I0106 14:51:09.984160 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/8833c85f-4713-4005-a7ad-e3446d62c1cf-ceph-nova-0\") pod \"8833c85f-4713-4005-a7ad-e3446d62c1cf\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " Jan 06 14:51:09 crc kubenswrapper[4869]: I0106 14:51:09.984217 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-extra-config-0\") pod \"8833c85f-4713-4005-a7ad-e3446d62c1cf\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " Jan 06 14:51:09 crc kubenswrapper[4869]: I0106 14:51:09.984246 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-migration-ssh-key-0\") pod \"8833c85f-4713-4005-a7ad-e3446d62c1cf\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " Jan 06 14:51:09 crc kubenswrapper[4869]: I0106 14:51:09.984313 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" 
(UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-migration-ssh-key-1\") pod \"8833c85f-4713-4005-a7ad-e3446d62c1cf\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " Jan 06 14:51:09 crc kubenswrapper[4869]: I0106 14:51:09.984383 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-ssh-key-openstack-edpm-ipam\") pod \"8833c85f-4713-4005-a7ad-e3446d62c1cf\" (UID: \"8833c85f-4713-4005-a7ad-e3446d62c1cf\") " Jan 06 14:51:09 crc kubenswrapper[4869]: I0106 14:51:09.990648 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "8833c85f-4713-4005-a7ad-e3446d62c1cf" (UID: "8833c85f-4713-4005-a7ad-e3446d62c1cf"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:51:09 crc kubenswrapper[4869]: I0106 14:51:09.990675 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8833c85f-4713-4005-a7ad-e3446d62c1cf-kube-api-access-n8vc8" (OuterVolumeSpecName: "kube-api-access-n8vc8") pod "8833c85f-4713-4005-a7ad-e3446d62c1cf" (UID: "8833c85f-4713-4005-a7ad-e3446d62c1cf"). InnerVolumeSpecName "kube-api-access-n8vc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:51:09 crc kubenswrapper[4869]: I0106 14:51:09.998622 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-ceph" (OuterVolumeSpecName: "ceph") pod "8833c85f-4713-4005-a7ad-e3446d62c1cf" (UID: "8833c85f-4713-4005-a7ad-e3446d62c1cf"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.010593 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8833c85f-4713-4005-a7ad-e3446d62c1cf-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "8833c85f-4713-4005-a7ad-e3446d62c1cf" (UID: "8833c85f-4713-4005-a7ad-e3446d62c1cf"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.014799 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "8833c85f-4713-4005-a7ad-e3446d62c1cf" (UID: "8833c85f-4713-4005-a7ad-e3446d62c1cf"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.015910 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "8833c85f-4713-4005-a7ad-e3446d62c1cf" (UID: "8833c85f-4713-4005-a7ad-e3446d62c1cf"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.022556 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-inventory" (OuterVolumeSpecName: "inventory") pod "8833c85f-4713-4005-a7ad-e3446d62c1cf" (UID: "8833c85f-4713-4005-a7ad-e3446d62c1cf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.027764 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8833c85f-4713-4005-a7ad-e3446d62c1cf" (UID: "8833c85f-4713-4005-a7ad-e3446d62c1cf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.032356 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "8833c85f-4713-4005-a7ad-e3446d62c1cf" (UID: "8833c85f-4713-4005-a7ad-e3446d62c1cf"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.034971 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "8833c85f-4713-4005-a7ad-e3446d62c1cf" (UID: "8833c85f-4713-4005-a7ad-e3446d62c1cf"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.038784 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "8833c85f-4713-4005-a7ad-e3446d62c1cf" (UID: "8833c85f-4713-4005-a7ad-e3446d62c1cf"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.087000 4869 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.087044 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8vc8\" (UniqueName: \"kubernetes.io/projected/8833c85f-4713-4005-a7ad-e3446d62c1cf-kube-api-access-n8vc8\") on node \"crc\" DevicePath \"\"" Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.087056 4869 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.087070 4869 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/8833c85f-4713-4005-a7ad-e3446d62c1cf-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.087083 4869 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.087097 4869 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.087107 4869 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.087120 4869 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.087130 4869 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-inventory\") on node \"crc\" DevicePath \"\"" Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.087140 4869 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.087151 4869 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8833c85f-4713-4005-a7ad-e3446d62c1cf-ceph\") on node \"crc\" DevicePath \"\"" Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.839198 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" event={"ID":"8833c85f-4713-4005-a7ad-e3446d62c1cf","Type":"ContainerDied","Data":"12cb91bb754ef4ca28881e75b7c9998a5b57caf77a8712b5cbd1370cf4a72f84"} Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.839427 4869 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12cb91bb754ef4ca28881e75b7c9998a5b57caf77a8712b5cbd1370cf4a72f84" Jan 06 14:51:10 crc kubenswrapper[4869]: I0106 14:51:10.839444 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj" Jan 06 14:51:11 crc kubenswrapper[4869]: I0106 14:51:11.124860 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="cdd7985d-7085-4e06-9be1-e35e94d9c544" containerName="ceilometer-notification-agent" probeResult="failure" output=< Jan 06 14:51:11 crc kubenswrapper[4869]: Unknown error: Expecting value: line 1 column 1 (char 0) Jan 06 14:51:11 crc kubenswrapper[4869]: > Jan 06 14:51:12 crc kubenswrapper[4869]: I0106 14:51:12.860219 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lptv" event={"ID":"bdf02852-98f1-4f56-b0b4-593d1ada3dc0","Type":"ContainerStarted","Data":"d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14"} Jan 06 14:51:14 crc kubenswrapper[4869]: I0106 14:51:14.857475 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8lptv" Jan 06 14:51:14 crc kubenswrapper[4869]: I0106 14:51:14.858000 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8lptv" Jan 06 14:51:14 crc kubenswrapper[4869]: I0106 14:51:14.921646 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8lptv" Jan 06 14:51:14 crc kubenswrapper[4869]: I0106 14:51:14.958368 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8lptv" podStartSLOduration=5.242549968 podStartE2EDuration="1m10.958349928s" podCreationTimestamp="2026-01-06 14:50:04 +0000 UTC" firstStartedPulling="2026-01-06 14:50:05.920326633 +0000 UTC m=+3024.460014297" lastFinishedPulling="2026-01-06 14:51:11.636126593 +0000 UTC m=+3090.175814257" observedRunningTime="2026-01-06 14:51:13.904193598 +0000 UTC m=+3092.443881262" watchObservedRunningTime="2026-01-06 14:51:14.958349928 +0000 UTC m=+3093.498037592" Jan 06 14:51:19 crc kubenswrapper[4869]: I0106 14:51:19.705151 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:51:19 crc kubenswrapper[4869]: E0106 14:51:19.705920 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:51:24 crc kubenswrapper[4869]: I0106 14:51:24.909691 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8lptv" Jan 06 14:51:31 crc kubenswrapper[4869]: I0106 14:51:31.705510 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:51:31 crc kubenswrapper[4869]: E0106 14:51:31.706386 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:51:33 crc kubenswrapper[4869]: E0106 14:51:33.824481 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" Jan 06 14:51:34 crc kubenswrapper[4869]: I0106 14:51:34.722400 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 06 14:51:34 crc kubenswrapper[4869]: I0106 14:51:34.722477 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 06 14:51:35 crc kubenswrapper[4869]: I0106 14:51:35.691855 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="556f7f3f-b9e0-4e69-a659-5ef5d052a7b4" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.181:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 06 14:51:36 crc kubenswrapper[4869]: E0106 14:51:36.929982 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:51:26Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:51:26Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:51:26Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-06T14:51:26Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 06 14:51:39 crc kubenswrapper[4869]: I0106 14:51:39.721151 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 06 14:51:39 crc kubenswrapper[4869]: I0106 14:51:39.722249 4869 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 06 14:51:41 crc kubenswrapper[4869]: I0106 14:51:41.139413 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="cdd7985d-7085-4e06-9be1-e35e94d9c544" containerName="ceilometer-notification-agent" probeResult="failure" output=< Jan 06 14:51:41 crc kubenswrapper[4869]: Unkown error: Expecting value: line 1 column 1 (char 0) Jan 06 14:51:41 crc kubenswrapper[4869]: > Jan 06 14:51:41 crc kubenswrapper[4869]: I0106 14:51:41.139774 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 06 14:51:41 crc kubenswrapper[4869]: I0106 14:51:41.456307 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 06 14:51:41 crc kubenswrapper[4869]: I0106 14:51:41.456371 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 06 14:51:41 crc kubenswrapper[4869]: I0106 14:51:41.855472 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" podUID="e39be0e5-0e29-45cd-925b-6eafb2b385a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/healthz\": dial tcp 10.217.0.79:8081: connect: connection refused" Jan 06 14:51:41 crc kubenswrapper[4869]: I0106 14:51:41.855522 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" podUID="e39be0e5-0e29-45cd-925b-6eafb2b385a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": dial tcp 10.217.0.79:8081: connect: connection refused" Jan 06 14:51:42 crc kubenswrapper[4869]: I0106 14:51:42.022502 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" podUID="7a343a23-f2df-474c-842c-f999f7d0e9b4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": dial tcp 10.217.0.85:8081: connect: connection refused" Jan 06 14:51:42 crc kubenswrapper[4869]: I0106 14:51:42.022563 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" podUID="7a343a23-f2df-474c-842c-f999f7d0e9b4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": dial tcp 10.217.0.85:8081: connect: connection refused" Jan 06 14:51:43 crc kubenswrapper[4869]: I0106 14:51:43.408799 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" podUID="da44c856-c228-45b1-947b-891308581bb6" containerName="manager" probeResult="failure" 
output="Get \"http://10.217.0.80:8081/readyz\": dial tcp 10.217.0.80:8081: connect: connection refused" Jan 06 14:51:43 crc kubenswrapper[4869]: E0106 14:51:43.825205 4869 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" Jan 06 14:51:44 crc kubenswrapper[4869]: I0106 14:51:44.722825 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 06 14:51:44 crc kubenswrapper[4869]: I0106 14:51:44.722943 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 06 14:51:45 crc kubenswrapper[4869]: I0106 14:51:45.142785 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 06 14:51:45 crc kubenswrapper[4869]: I0106 14:51:45.142841 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 06 14:51:45 crc kubenswrapper[4869]: I0106 14:51:45.691374 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="556f7f3f-b9e0-4e69-a659-5ef5d052a7b4" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.181:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 06 14:51:45 crc kubenswrapper[4869]: I0106 14:51:45.705082 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:51:45 crc kubenswrapper[4869]: E0106 14:51:45.705563 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:51:46 crc kubenswrapper[4869]: E0106 14:51:46.930738 4869 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 06 14:51:49 crc kubenswrapper[4869]: I0106 14:51:49.722823 4869 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get 
\"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded" start-of-body= Jan 06 14:51:49 crc kubenswrapper[4869]: I0106 14:51:49.723212 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded" Jan 06 14:51:50 crc kubenswrapper[4869]: I0106 14:51:50.566872 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lptv"] Jan 06 14:51:50 crc kubenswrapper[4869]: I0106 14:51:50.567426 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8lptv" podUID="bdf02852-98f1-4f56-b0b4-593d1ada3dc0" containerName="registry-server" containerID="cri-o://d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14" gracePeriod=2 Jan 06 14:51:50 crc kubenswrapper[4869]: I0106 14:51:50.688320 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-5tjdn" podUID="9fceb23f-1f65-40c7-b8e9-3de1097ecee2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.53:8081/readyz\": dial tcp 10.217.0.53:8081: connect: connection refused" Jan 06 14:51:50 crc kubenswrapper[4869]: I0106 14:51:50.752245 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq" podUID="4a2ad023-66f0-45bc-9bea-b64cca26c388" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.55:8081/readyz\": dial tcp 10.217.0.55:8081: connect: connection refused" Jan 06 14:51:50 crc kubenswrapper[4869]: I0106 14:51:50.832066 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-7596f46b97-l75w2" podUID="a9cad33b-8b9c-434b-9e28-f730ca0cba42" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/readyz\": dial tcp 10.217.0.58:8081: connect: connection refused" Jan 06 14:51:50 crc kubenswrapper[4869]: I0106 14:51:50.846926 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hcm2g" podUID="81a6ac18-5e57-4f17-a5b3-64b76e59f83b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/readyz\": dial tcp 10.217.0.60:8081: connect: connection refused" Jan 06 14:51:50 crc kubenswrapper[4869]: I0106 14:51:50.987553 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-npl5f" podUID="9b55eca9-5342-4826-b2fd-3fe94520e1f2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.61:8081/readyz\": dial tcp 10.217.0.61:8081: connect: connection refused" Jan 06 14:51:51 crc kubenswrapper[4869]: I0106 14:51:51.012320 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-2qx58" podUID="6e523183-ec1a-481e-822e-67c457b448c0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.54:8081/readyz\": dial tcp 10.217.0.54:8081: connect: connection refused" Jan 06 14:51:51 crc kubenswrapper[4869]: I0106 14:51:51.176044 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g6xt2" 
podUID="d04195cb-3a00-4785-860d-8bb9537f42b7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": dial tcp 10.217.0.73:8081: connect: connection refused" Jan 06 14:51:51 crc kubenswrapper[4869]: I0106 14:51:51.233059 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9" podUID="4e8628c6-a97f-48ea-a91a-1ea5257c5e49" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": dial tcp 10.217.0.74:8081: connect: connection refused" Jan 06 14:51:51 crc kubenswrapper[4869]: I0106 14:51:51.294895 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz" podUID="ea758643-2a27-40e6-8c7f-8b0020e0ad97" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": dial tcp 10.217.0.75:8081: connect: connection refused" Jan 06 14:51:51 crc kubenswrapper[4869]: I0106 14:51:51.439748 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-pm7np" podUID="c634faec-64fc-4d2c-af70-94f85b6fcd59" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": dial tcp 10.217.0.76:8081: connect: connection refused" Jan 06 14:51:51 crc kubenswrapper[4869]: I0106 14:51:51.456969 4869 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 06 14:51:51 crc kubenswrapper[4869]: I0106 14:51:51.457103 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 06 14:51:51 crc kubenswrapper[4869]: I0106 14:51:51.472899 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-7lbqn" podUID="995201cd-f7dd-40a5-8854-192f32239e25" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": dial tcp 10.217.0.77:8081: connect: connection refused" Jan 06 14:51:51 crc kubenswrapper[4869]: I0106 14:51:51.514096 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg" podUID="c0aac0d5-701b-4a75-9bd0-4c9530692565" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": dial tcp 10.217.0.78:8081: connect: connection refused" Jan 06 14:51:51 crc kubenswrapper[4869]: I0106 14:51:51.643990 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wl9w7" podUID="ee1dd5c3-5e85-416c-933a-07fb51ec12d8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": dial tcp 10.217.0.81:8081: connect: connection refused" Jan 06 14:51:51 crc kubenswrapper[4869]: I0106 14:51:51.685918 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-p249l" 
podUID="def35933-1964-4328-a9b2-dc9f72d11bcf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": dial tcp 10.217.0.82:8081: connect: connection refused" Jan 06 14:51:51 crc kubenswrapper[4869]: I0106 14:51:51.811250 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" podUID="3f4a328b-302b-496b-af2b-abec609682a6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": dial tcp 10.217.0.83:8081: connect: connection refused" Jan 06 14:51:51 crc kubenswrapper[4869]: I0106 14:51:51.855415 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" podUID="e39be0e5-0e29-45cd-925b-6eafb2b385a9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": dial tcp 10.217.0.79:8081: connect: connection refused" Jan 06 14:51:51 crc kubenswrapper[4869]: I0106 14:51:51.886250 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh" podUID="2ad69939-a56e-4589-bf4b-68fb8d42d7eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": dial tcp 10.217.0.86:8081: connect: connection refused" Jan 06 14:51:51 crc kubenswrapper[4869]: I0106 14:51:51.919343 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d2jnv" podUID="5427b0d1-29a3-47c0-9a1a-a945063ae129" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": dial tcp 10.217.0.84:8081: connect: connection refused" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.022330 4869 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" podUID="7a343a23-f2df-474c-842c-f999f7d0e9b4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": dial tcp 10.217.0.85:8081: connect: connection refused" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.207113 4869 generic.go:334] "Generic (PLEG): container finished" podID="4e8628c6-a97f-48ea-a91a-1ea5257c5e49" containerID="2094401fdc20524112e40969c5e5d8e0441073fe12e6722481d52421edfbb04f" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.207166 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9" event={"ID":"4e8628c6-a97f-48ea-a91a-1ea5257c5e49","Type":"ContainerDied","Data":"2094401fdc20524112e40969c5e5d8e0441073fe12e6722481d52421edfbb04f"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.207800 4869 scope.go:117] "RemoveContainer" containerID="2094401fdc20524112e40969c5e5d8e0441073fe12e6722481d52421edfbb04f" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.209652 4869 generic.go:334] "Generic (PLEG): container finished" podID="6e523183-ec1a-481e-822e-67c457b448c0" containerID="0ec9ebfd27586d3ab0878f2a7aba831cd2653a1a18fa56f75efd1f6aaa871881" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.209873 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-2qx58" event={"ID":"6e523183-ec1a-481e-822e-67c457b448c0","Type":"ContainerDied","Data":"0ec9ebfd27586d3ab0878f2a7aba831cd2653a1a18fa56f75efd1f6aaa871881"} Jan 06 
14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.210118 4869 scope.go:117] "RemoveContainer" containerID="0ec9ebfd27586d3ab0878f2a7aba831cd2653a1a18fa56f75efd1f6aaa871881" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.212579 4869 generic.go:334] "Generic (PLEG): container finished" podID="c0aac0d5-701b-4a75-9bd0-4c9530692565" containerID="8af8f0fb352df65df58e4f81a8fd8feba04c0232c3c4768195c8469a7fe6250d" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.212637 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg" event={"ID":"c0aac0d5-701b-4a75-9bd0-4c9530692565","Type":"ContainerDied","Data":"8af8f0fb352df65df58e4f81a8fd8feba04c0232c3c4768195c8469a7fe6250d"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.213226 4869 scope.go:117] "RemoveContainer" containerID="8af8f0fb352df65df58e4f81a8fd8feba04c0232c3c4768195c8469a7fe6250d" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.214587 4869 generic.go:334] "Generic (PLEG): container finished" podID="def35933-1964-4328-a9b2-dc9f72d11bcf" containerID="f1a51149b3e31b8a4eef70504cfb37d3e6400af3cbe102c543e75ccdbced57f7" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.214622 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-p249l" event={"ID":"def35933-1964-4328-a9b2-dc9f72d11bcf","Type":"ContainerDied","Data":"f1a51149b3e31b8a4eef70504cfb37d3e6400af3cbe102c543e75ccdbced57f7"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.214870 4869 scope.go:117] "RemoveContainer" containerID="f1a51149b3e31b8a4eef70504cfb37d3e6400af3cbe102c543e75ccdbced57f7" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.216578 4869 generic.go:334] "Generic (PLEG): container finished" podID="ee1dd5c3-5e85-416c-933a-07fb51ec12d8" containerID="daf5f32aaff8d77ac8accf864a4306f102ae320925b9bf416979b73d5a3c019d" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.216615 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wl9w7" event={"ID":"ee1dd5c3-5e85-416c-933a-07fb51ec12d8","Type":"ContainerDied","Data":"daf5f32aaff8d77ac8accf864a4306f102ae320925b9bf416979b73d5a3c019d"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.216894 4869 scope.go:117] "RemoveContainer" containerID="daf5f32aaff8d77ac8accf864a4306f102ae320925b9bf416979b73d5a3c019d" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.220501 4869 generic.go:334] "Generic (PLEG): container finished" podID="2ad69939-a56e-4589-bf4b-68fb8d42d7eb" containerID="6ab3451d12a903658e16b8ee97a48f83819c8c9c2ed2d4cf06a497bd27fc912a" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.220554 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh" event={"ID":"2ad69939-a56e-4589-bf4b-68fb8d42d7eb","Type":"ContainerDied","Data":"6ab3451d12a903658e16b8ee97a48f83819c8c9c2ed2d4cf06a497bd27fc912a"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.220942 4869 scope.go:117] "RemoveContainer" containerID="6ab3451d12a903658e16b8ee97a48f83819c8c9c2ed2d4cf06a497bd27fc912a" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.223949 4869 generic.go:334] "Generic (PLEG): container finished" podID="4a2ad023-66f0-45bc-9bea-b64cca26c388" containerID="949155fa791eb5534c850d723dd4783c896fa6a3efcf9b39412570c56cd12ba1" exitCode=1 Jan 06 
14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.224035 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq" event={"ID":"4a2ad023-66f0-45bc-9bea-b64cca26c388","Type":"ContainerDied","Data":"949155fa791eb5534c850d723dd4783c896fa6a3efcf9b39412570c56cd12ba1"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.224819 4869 scope.go:117] "RemoveContainer" containerID="949155fa791eb5534c850d723dd4783c896fa6a3efcf9b39412570c56cd12ba1" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.232784 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.236394 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.236635 4869 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9e51d67536462a19489ffde362436cceddb8498c573094ae66f23aaac02e955a" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.236772 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9e51d67536462a19489ffde362436cceddb8498c573094ae66f23aaac02e955a"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.236842 4869 scope.go:117] "RemoveContainer" containerID="0691d53b8f75c65c7afbf74c6a46e97d168af4a60da259d2c20a1c6d1cc380e8" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.237661 4869 scope.go:117] "RemoveContainer" containerID="9e51d67536462a19489ffde362436cceddb8498c573094ae66f23aaac02e955a" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.240650 4869 generic.go:334] "Generic (PLEG): container finished" podID="ea758643-2a27-40e6-8c7f-8b0020e0ad97" containerID="b24be7dfa2611785b1f30549d689bd09271f8196d9e4035815689895d7b4bf4f" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.240691 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz" event={"ID":"ea758643-2a27-40e6-8c7f-8b0020e0ad97","Type":"ContainerDied","Data":"b24be7dfa2611785b1f30549d689bd09271f8196d9e4035815689895d7b4bf4f"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.241816 4869 scope.go:117] "RemoveContainer" containerID="b24be7dfa2611785b1f30549d689bd09271f8196d9e4035815689895d7b4bf4f" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.244351 4869 generic.go:334] "Generic (PLEG): container finished" podID="7a343a23-f2df-474c-842c-f999f7d0e9b4" containerID="96741600da11a25be327053e9cc07a4e5c4c6b7ae3ec4201dc556cc3152339ab" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.244396 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" event={"ID":"7a343a23-f2df-474c-842c-f999f7d0e9b4","Type":"ContainerDied","Data":"96741600da11a25be327053e9cc07a4e5c4c6b7ae3ec4201dc556cc3152339ab"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.245356 4869 scope.go:117] "RemoveContainer" containerID="96741600da11a25be327053e9cc07a4e5c4c6b7ae3ec4201dc556cc3152339ab" Jan 06 14:51:52 crc 
kubenswrapper[4869]: I0106 14:51:52.249411 4869 generic.go:334] "Generic (PLEG): container finished" podID="a9cad33b-8b9c-434b-9e28-f730ca0cba42" containerID="5b62c5f7f7c95e431d459e853b2901c2c02918da47fcb5d01a0313d3fc9a1065" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.249488 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7596f46b97-l75w2" event={"ID":"a9cad33b-8b9c-434b-9e28-f730ca0cba42","Type":"ContainerDied","Data":"5b62c5f7f7c95e431d459e853b2901c2c02918da47fcb5d01a0313d3fc9a1065"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.250133 4869 scope.go:117] "RemoveContainer" containerID="5b62c5f7f7c95e431d459e853b2901c2c02918da47fcb5d01a0313d3fc9a1065" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.258030 4869 generic.go:334] "Generic (PLEG): container finished" podID="81a6ac18-5e57-4f17-a5b3-64b76e59f83b" containerID="175c5b23ba4dbfcdff4243e7ee9bb10f0a5125b40dd0e22981c86bb049bf4066" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.258142 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hcm2g" event={"ID":"81a6ac18-5e57-4f17-a5b3-64b76e59f83b","Type":"ContainerDied","Data":"175c5b23ba4dbfcdff4243e7ee9bb10f0a5125b40dd0e22981c86bb049bf4066"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.258904 4869 scope.go:117] "RemoveContainer" containerID="175c5b23ba4dbfcdff4243e7ee9bb10f0a5125b40dd0e22981c86bb049bf4066" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.262195 4869 generic.go:334] "Generic (PLEG): container finished" podID="ff01227e-d9f4-4dd0-bc22-455a00294406" containerID="88e7dbe5dcc41aaa2fd593d6a1e5a133e65a3754d5f207e84936f72261c16f12" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.262268 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jtv5n" event={"ID":"ff01227e-d9f4-4dd0-bc22-455a00294406","Type":"ContainerDied","Data":"88e7dbe5dcc41aaa2fd593d6a1e5a133e65a3754d5f207e84936f72261c16f12"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.262879 4869 scope.go:117] "RemoveContainer" containerID="88e7dbe5dcc41aaa2fd593d6a1e5a133e65a3754d5f207e84936f72261c16f12" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.267528 4869 generic.go:334] "Generic (PLEG): container finished" podID="995201cd-f7dd-40a5-8854-192f32239e25" containerID="02f3dcea7c0a0fe08b22873c8349da278225601cd270519697bf69bb4ff1fb69" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.267609 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-7lbqn" event={"ID":"995201cd-f7dd-40a5-8854-192f32239e25","Type":"ContainerDied","Data":"02f3dcea7c0a0fe08b22873c8349da278225601cd270519697bf69bb4ff1fb69"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.268217 4869 scope.go:117] "RemoveContainer" containerID="02f3dcea7c0a0fe08b22873c8349da278225601cd270519697bf69bb4ff1fb69" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.292309 4869 generic.go:334] "Generic (PLEG): container finished" podID="c634faec-64fc-4d2c-af70-94f85b6fcd59" containerID="06aa3378a0519e48931ccfb5f5c95f789cf6fc251e98fa7e4d2eb15a996b94af" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.292408 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-pm7np" 
event={"ID":"c634faec-64fc-4d2c-af70-94f85b6fcd59","Type":"ContainerDied","Data":"06aa3378a0519e48931ccfb5f5c95f789cf6fc251e98fa7e4d2eb15a996b94af"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.292777 4869 scope.go:117] "RemoveContainer" containerID="06aa3378a0519e48931ccfb5f5c95f789cf6fc251e98fa7e4d2eb15a996b94af" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.303110 4869 generic.go:334] "Generic (PLEG): container finished" podID="5427b0d1-29a3-47c0-9a1a-a945063ae129" containerID="86defbb06bbee1d7228bde07e3a9f11d482f3201f21cfe1f7ec248776706092d" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.303191 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d2jnv" event={"ID":"5427b0d1-29a3-47c0-9a1a-a945063ae129","Type":"ContainerDied","Data":"86defbb06bbee1d7228bde07e3a9f11d482f3201f21cfe1f7ec248776706092d"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.307691 4869 scope.go:117] "RemoveContainer" containerID="86defbb06bbee1d7228bde07e3a9f11d482f3201f21cfe1f7ec248776706092d" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.315741 4869 generic.go:334] "Generic (PLEG): container finished" podID="9fceb23f-1f65-40c7-b8e9-3de1097ecee2" containerID="17b862099d24e1d8a052ddf4dfb6d65190c6361dc11de9fc6191530561a85ff0" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.315799 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-5tjdn" event={"ID":"9fceb23f-1f65-40c7-b8e9-3de1097ecee2","Type":"ContainerDied","Data":"17b862099d24e1d8a052ddf4dfb6d65190c6361dc11de9fc6191530561a85ff0"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.316260 4869 scope.go:117] "RemoveContainer" containerID="17b862099d24e1d8a052ddf4dfb6d65190c6361dc11de9fc6191530561a85ff0" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.322130 4869 generic.go:334] "Generic (PLEG): container finished" podID="32b4a497-f056-4c29-890a-bb5616a79adf" containerID="a69cd355a731e8491b44a5d1d0ca68bb8ec440178906c72ef06f27441621c7cd" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.322177 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-596cb89f89-kplkv" event={"ID":"32b4a497-f056-4c29-890a-bb5616a79adf","Type":"ContainerDied","Data":"a69cd355a731e8491b44a5d1d0ca68bb8ec440178906c72ef06f27441621c7cd"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.322582 4869 scope.go:117] "RemoveContainer" containerID="a69cd355a731e8491b44a5d1d0ca68bb8ec440178906c72ef06f27441621c7cd" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.334535 4869 generic.go:334] "Generic (PLEG): container finished" podID="9b55eca9-5342-4826-b2fd-3fe94520e1f2" containerID="4c29b67376d2afa3d18c49c426a86a353d188749f80f6233bd4e8aef8ffe932c" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.334606 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-npl5f" event={"ID":"9b55eca9-5342-4826-b2fd-3fe94520e1f2","Type":"ContainerDied","Data":"4c29b67376d2afa3d18c49c426a86a353d188749f80f6233bd4e8aef8ffe932c"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.335184 4869 scope.go:117] "RemoveContainer" containerID="4c29b67376d2afa3d18c49c426a86a353d188749f80f6233bd4e8aef8ffe932c" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.340195 4869 generic.go:334] "Generic 
(PLEG): container finished" podID="24ca9405-001a-4beb-a0fa-0f3775dab087" containerID="a9b92837966625496cfe8f64fb1439f22510adbce8ac6feedfbabb4814a48999" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.340289 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" event={"ID":"24ca9405-001a-4beb-a0fa-0f3775dab087","Type":"ContainerDied","Data":"a9b92837966625496cfe8f64fb1439f22510adbce8ac6feedfbabb4814a48999"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.341044 4869 scope.go:117] "RemoveContainer" containerID="a9b92837966625496cfe8f64fb1439f22510adbce8ac6feedfbabb4814a48999" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.342833 4869 generic.go:334] "Generic (PLEG): container finished" podID="b295076d-930c-4a2b-9ba5-3cee1623e268" containerID="746773a4c23917dfd990a920cd30184a0ca2ebb5ab87510164829d449a686016" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.343183 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" event={"ID":"b295076d-930c-4a2b-9ba5-3cee1623e268","Type":"ContainerDied","Data":"746773a4c23917dfd990a920cd30184a0ca2ebb5ab87510164829d449a686016"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.343465 4869 scope.go:117] "RemoveContainer" containerID="746773a4c23917dfd990a920cd30184a0ca2ebb5ab87510164829d449a686016" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.355042 4869 generic.go:334] "Generic (PLEG): container finished" podID="da44c856-c228-45b1-947b-891308581bb6" containerID="2e6b86b22f56e9439e4c3065e3c39bb524c290940fde5c98911ccc693e4112af" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.355145 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" event={"ID":"da44c856-c228-45b1-947b-891308581bb6","Type":"ContainerDied","Data":"2e6b86b22f56e9439e4c3065e3c39bb524c290940fde5c98911ccc693e4112af"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.355766 4869 scope.go:117] "RemoveContainer" containerID="2e6b86b22f56e9439e4c3065e3c39bb524c290940fde5c98911ccc693e4112af" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.359640 4869 generic.go:334] "Generic (PLEG): container finished" podID="81de01b0-a48a-4ca7-8509-9d12c5cb27da" containerID="159c2e71bc756c346e605646e62c9144596d3721107de6825b074fec2cb80137" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.359735 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65" event={"ID":"81de01b0-a48a-4ca7-8509-9d12c5cb27da","Type":"ContainerDied","Data":"159c2e71bc756c346e605646e62c9144596d3721107de6825b074fec2cb80137"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.360826 4869 scope.go:117] "RemoveContainer" containerID="159c2e71bc756c346e605646e62c9144596d3721107de6825b074fec2cb80137" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.372033 4869 generic.go:334] "Generic (PLEG): container finished" podID="d04195cb-3a00-4785-860d-8bb9537f42b7" containerID="1407457f7e705c4e431efd5a48d257de0ce1a75c53ea6946c24675cf4a98f998" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.372091 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g6xt2" 
event={"ID":"d04195cb-3a00-4785-860d-8bb9537f42b7","Type":"ContainerDied","Data":"1407457f7e705c4e431efd5a48d257de0ce1a75c53ea6946c24675cf4a98f998"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.372631 4869 scope.go:117] "RemoveContainer" containerID="1407457f7e705c4e431efd5a48d257de0ce1a75c53ea6946c24675cf4a98f998" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.377907 4869 generic.go:334] "Generic (PLEG): container finished" podID="e39be0e5-0e29-45cd-925b-6eafb2b385a9" containerID="3b40c96ceb25260245edaa8f4b527cb38c9d12d6147a5203757d76f8b77b13d5" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.377963 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" event={"ID":"e39be0e5-0e29-45cd-925b-6eafb2b385a9","Type":"ContainerDied","Data":"3b40c96ceb25260245edaa8f4b527cb38c9d12d6147a5203757d76f8b77b13d5"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.378267 4869 scope.go:117] "RemoveContainer" containerID="3b40c96ceb25260245edaa8f4b527cb38c9d12d6147a5203757d76f8b77b13d5" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.379796 4869 generic.go:334] "Generic (PLEG): container finished" podID="3f4a328b-302b-496b-af2b-abec609682a6" containerID="d35f27a17e46ec9e28b3de1a927d185e2d8b13c27d6a85662be2391aa7c6403e" exitCode=1 Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.379818 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" event={"ID":"3f4a328b-302b-496b-af2b-abec609682a6","Type":"ContainerDied","Data":"d35f27a17e46ec9e28b3de1a927d185e2d8b13c27d6a85662be2391aa7c6403e"} Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.380076 4869 scope.go:117] "RemoveContainer" containerID="d35f27a17e46ec9e28b3de1a927d185e2d8b13c27d6a85662be2391aa7c6403e" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.441367 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.527844 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-operator-596cb89f89-kplkv" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.527918 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-596cb89f89-kplkv" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.896034 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" Jan 06 14:51:52 crc kubenswrapper[4869]: I0106 14:51:52.896126 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" Jan 06 14:51:53 crc kubenswrapper[4869]: I0106 14:51:53.092578 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 14:51:53 crc kubenswrapper[4869]: I0106 14:51:53.392040 4869 generic.go:334] "Generic (PLEG): container finished" podID="bdf02852-98f1-4f56-b0b4-593d1ada3dc0" containerID="d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14" exitCode=0 Jan 06 14:51:53 crc kubenswrapper[4869]: I0106 14:51:53.392079 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-8lptv" event={"ID":"bdf02852-98f1-4f56-b0b4-593d1ada3dc0","Type":"ContainerDied","Data":"d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14"} Jan 06 14:51:53 crc kubenswrapper[4869]: I0106 14:51:53.408408 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" Jan 06 14:51:53 crc kubenswrapper[4869]: I0106 14:51:53.408479 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" Jan 06 14:51:53 crc kubenswrapper[4869]: I0106 14:51:53.724551 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:51:53 crc kubenswrapper[4869]: I0106 14:51:53.724639 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:51:54 crc kubenswrapper[4869]: E0106 14:51:54.858324 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" containerID="d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14" cmd=["grpc_health_probe","-addr=:50051"] Jan 06 14:51:54 crc kubenswrapper[4869]: E0106 14:51:54.859563 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" containerID="d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14" cmd=["grpc_health_probe","-addr=:50051"] Jan 06 14:51:54 crc kubenswrapper[4869]: E0106 14:51:54.860082 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" containerID="d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14" cmd=["grpc_health_probe","-addr=:50051"] Jan 06 14:51:54 crc kubenswrapper[4869]: E0106 14:51:54.860167 4869 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-8lptv" podUID="bdf02852-98f1-4f56-b0b4-593d1ada3dc0" containerName="registry-server" Jan 06 14:51:55 crc kubenswrapper[4869]: I0106 14:51:55.142708 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 14:51:55 crc kubenswrapper[4869]: I0106 14:51:55.679788 4869 scope.go:117] "RemoveContainer" containerID="0691d53b8f75c65c7afbf74c6a46e97d168af4a60da259d2c20a1c6d1cc380e8" Jan 06 14:51:56 crc kubenswrapper[4869]: I0106 14:51:56.704984 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:51:56 crc kubenswrapper[4869]: E0106 14:51:56.705457 4869 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:52:00 crc kubenswrapper[4869]: I0106 14:52:00.687246 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-5tjdn" Jan 06 14:52:00 crc kubenswrapper[4869]: I0106 14:52:00.687635 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-5tjdn" Jan 06 14:52:00 crc kubenswrapper[4869]: I0106 14:52:00.751903 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq" Jan 06 14:52:00 crc kubenswrapper[4869]: I0106 14:52:00.754295 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq" Jan 06 14:52:00 crc kubenswrapper[4869]: I0106 14:52:00.831792 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7596f46b97-l75w2" Jan 06 14:52:00 crc kubenswrapper[4869]: I0106 14:52:00.831891 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-7596f46b97-l75w2" Jan 06 14:52:00 crc kubenswrapper[4869]: I0106 14:52:00.846021 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hcm2g" Jan 06 14:52:00 crc kubenswrapper[4869]: I0106 14:52:00.846055 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hcm2g" Jan 06 14:52:00 crc kubenswrapper[4869]: I0106 14:52:00.987051 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-npl5f" Jan 06 14:52:00 crc kubenswrapper[4869]: I0106 14:52:00.987114 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-npl5f" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.011722 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-2qx58" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.011785 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-2qx58" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.175240 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g6xt2" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.175388 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g6xt2" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.232336 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.232451 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.294233 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.294300 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.439989 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-pm7np" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.440043 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-pm7np" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.456152 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.472806 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-7lbqn" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.473658 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-7lbqn" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.514115 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.514165 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.643481 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wl9w7" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.643588 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wl9w7" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.685024 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-p249l" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.685074 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-p249l" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.811270 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.811741 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.854479 4869 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.854589 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.885376 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.885441 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.919140 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d2jnv" Jan 06 14:52:01 crc kubenswrapper[4869]: I0106 14:52:01.919226 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d2jnv" Jan 06 14:52:02 crc kubenswrapper[4869]: I0106 14:52:02.022180 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" Jan 06 14:52:02 crc kubenswrapper[4869]: I0106 14:52:02.022248 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" Jan 06 14:52:04 crc kubenswrapper[4869]: E0106 14:52:04.858471 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" containerID="d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14" cmd=["grpc_health_probe","-addr=:50051"] Jan 06 14:52:04 crc kubenswrapper[4869]: E0106 14:52:04.859387 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" containerID="d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14" cmd=["grpc_health_probe","-addr=:50051"] Jan 06 14:52:04 crc kubenswrapper[4869]: E0106 14:52:04.859858 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" containerID="d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14" cmd=["grpc_health_probe","-addr=:50051"] Jan 06 14:52:04 crc kubenswrapper[4869]: E0106 14:52:04.859985 4869 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-8lptv" podUID="bdf02852-98f1-4f56-b0b4-593d1ada3dc0" containerName="registry-server" Jan 06 14:52:07 crc kubenswrapper[4869]: I0106 14:52:07.704642 4869 scope.go:117] 
"RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:52:07 crc kubenswrapper[4869]: E0106 14:52:07.705244 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:52:14 crc kubenswrapper[4869]: E0106 14:52:14.859017 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" containerID="d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14" cmd=["grpc_health_probe","-addr=:50051"] Jan 06 14:52:14 crc kubenswrapper[4869]: E0106 14:52:14.861552 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" containerID="d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14" cmd=["grpc_health_probe","-addr=:50051"] Jan 06 14:52:14 crc kubenswrapper[4869]: E0106 14:52:14.862233 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" containerID="d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14" cmd=["grpc_health_probe","-addr=:50051"] Jan 06 14:52:14 crc kubenswrapper[4869]: E0106 14:52:14.862296 4869 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-8lptv" podUID="bdf02852-98f1-4f56-b0b4-593d1ada3dc0" containerName="registry-server" Jan 06 14:52:20 crc kubenswrapper[4869]: I0106 14:52:20.705012 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:52:20 crc kubenswrapper[4869]: E0106 14:52:20.705961 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:52:24 crc kubenswrapper[4869]: E0106 14:52:24.858540 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" containerID="d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14" cmd=["grpc_health_probe","-addr=:50051"] Jan 06 14:52:24 crc 
kubenswrapper[4869]: E0106 14:52:24.859766 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" containerID="d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14" cmd=["grpc_health_probe","-addr=:50051"] Jan 06 14:52:24 crc kubenswrapper[4869]: E0106 14:52:24.860433 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" containerID="d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14" cmd=["grpc_health_probe","-addr=:50051"] Jan 06 14:52:24 crc kubenswrapper[4869]: E0106 14:52:24.860509 4869 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-8lptv" podUID="bdf02852-98f1-4f56-b0b4-593d1ada3dc0" containerName="registry-server" Jan 06 14:52:33 crc kubenswrapper[4869]: I0106 14:52:33.705554 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:52:33 crc kubenswrapper[4869]: E0106 14:52:33.706801 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:52:34 crc kubenswrapper[4869]: E0106 14:52:34.858949 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" containerID="d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14" cmd=["grpc_health_probe","-addr=:50051"] Jan 06 14:52:34 crc kubenswrapper[4869]: E0106 14:52:34.859904 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" containerID="d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14" cmd=["grpc_health_probe","-addr=:50051"] Jan 06 14:52:34 crc kubenswrapper[4869]: E0106 14:52:34.860823 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" containerID="d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14" cmd=["grpc_health_probe","-addr=:50051"] Jan 06 14:52:34 crc kubenswrapper[4869]: E0106 14:52:34.861133 4869 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID 
of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-8lptv" podUID="bdf02852-98f1-4f56-b0b4-593d1ada3dc0" containerName="registry-server" Jan 06 14:52:44 crc kubenswrapper[4869]: E0106 14:52:44.858694 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" containerID="d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14" cmd=["grpc_health_probe","-addr=:50051"] Jan 06 14:52:44 crc kubenswrapper[4869]: E0106 14:52:44.860073 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" containerID="d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14" cmd=["grpc_health_probe","-addr=:50051"] Jan 06 14:52:44 crc kubenswrapper[4869]: E0106 14:52:44.860573 4869 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" containerID="d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14" cmd=["grpc_health_probe","-addr=:50051"] Jan 06 14:52:44 crc kubenswrapper[4869]: E0106 14:52:44.860634 4869 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-8lptv" podUID="bdf02852-98f1-4f56-b0b4-593d1ada3dc0" containerName="registry-server" Jan 06 14:52:48 crc kubenswrapper[4869]: I0106 14:52:48.705248 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:52:48 crc kubenswrapper[4869]: E0106 14:52:48.706114 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:52:49 crc kubenswrapper[4869]: E0106 14:52:49.467628 4869 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_kube-controller-manager_kube-controller-manager-crc_openshift-kube-controller-manager_f614b9022728cf315e60c057852e563e_0 in pod sandbox 5c5f9c2a794cb1677d310cd128d7da3844d19fd22b4843f421d69295fdde2b78: identifier is not a container" containerID="0691d53b8f75c65c7afbf74c6a46e97d168af4a60da259d2c20a1c6d1cc380e8" Jan 06 14:52:49 crc kubenswrapper[4869]: E0106 14:52:49.467955 4869 kuberuntime_gc.go:150] "Failed to remove container" err="rpc error: code = Unknown desc = failed to delete container 
k8s_kube-controller-manager_kube-controller-manager-crc_openshift-kube-controller-manager_f614b9022728cf315e60c057852e563e_0 in pod sandbox 5c5f9c2a794cb1677d310cd128d7da3844d19fd22b4843f421d69295fdde2b78: identifier is not a container" containerID="0691d53b8f75c65c7afbf74c6a46e97d168af4a60da259d2c20a1c6d1cc380e8" Jan 06 14:52:49 crc kubenswrapper[4869]: I0106 14:52:49.658191 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8lptv" Jan 06 14:52:49 crc kubenswrapper[4869]: I0106 14:52:49.812766 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf02852-98f1-4f56-b0b4-593d1ada3dc0-catalog-content\") pod \"bdf02852-98f1-4f56-b0b4-593d1ada3dc0\" (UID: \"bdf02852-98f1-4f56-b0b4-593d1ada3dc0\") " Jan 06 14:52:49 crc kubenswrapper[4869]: I0106 14:52:49.813192 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf02852-98f1-4f56-b0b4-593d1ada3dc0-utilities\") pod \"bdf02852-98f1-4f56-b0b4-593d1ada3dc0\" (UID: \"bdf02852-98f1-4f56-b0b4-593d1ada3dc0\") " Jan 06 14:52:49 crc kubenswrapper[4869]: I0106 14:52:49.813264 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqpws\" (UniqueName: \"kubernetes.io/projected/bdf02852-98f1-4f56-b0b4-593d1ada3dc0-kube-api-access-xqpws\") pod \"bdf02852-98f1-4f56-b0b4-593d1ada3dc0\" (UID: \"bdf02852-98f1-4f56-b0b4-593d1ada3dc0\") " Jan 06 14:52:49 crc kubenswrapper[4869]: I0106 14:52:49.814413 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdf02852-98f1-4f56-b0b4-593d1ada3dc0-utilities" (OuterVolumeSpecName: "utilities") pod "bdf02852-98f1-4f56-b0b4-593d1ada3dc0" (UID: "bdf02852-98f1-4f56-b0b4-593d1ada3dc0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:52:49 crc kubenswrapper[4869]: I0106 14:52:49.821872 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdf02852-98f1-4f56-b0b4-593d1ada3dc0-kube-api-access-xqpws" (OuterVolumeSpecName: "kube-api-access-xqpws") pod "bdf02852-98f1-4f56-b0b4-593d1ada3dc0" (UID: "bdf02852-98f1-4f56-b0b4-593d1ada3dc0"). InnerVolumeSpecName "kube-api-access-xqpws". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:52:49 crc kubenswrapper[4869]: I0106 14:52:49.843155 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdf02852-98f1-4f56-b0b4-593d1ada3dc0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bdf02852-98f1-4f56-b0b4-593d1ada3dc0" (UID: "bdf02852-98f1-4f56-b0b4-593d1ada3dc0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:52:49 crc kubenswrapper[4869]: I0106 14:52:49.916306 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqpws\" (UniqueName: \"kubernetes.io/projected/bdf02852-98f1-4f56-b0b4-593d1ada3dc0-kube-api-access-xqpws\") on node \"crc\" DevicePath \"\"" Jan 06 14:52:49 crc kubenswrapper[4869]: I0106 14:52:49.916353 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bdf02852-98f1-4f56-b0b4-593d1ada3dc0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:52:49 crc kubenswrapper[4869]: I0106 14:52:49.916363 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bdf02852-98f1-4f56-b0b4-593d1ada3dc0-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:52:50 crc kubenswrapper[4869]: I0106 14:52:50.087757 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 06 14:52:50 crc kubenswrapper[4869]: I0106 14:52:50.092572 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh" event={"ID":"2ad69939-a56e-4589-bf4b-68fb8d42d7eb","Type":"ContainerStarted","Data":"21f401c302ce4bc1116d9ec6d0f5d30ecc2e2046d73321087614ecf742dc2a20"} Jan 06 14:52:50 crc kubenswrapper[4869]: I0106 14:52:50.092657 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh" Jan 06 14:52:50 crc kubenswrapper[4869]: I0106 14:52:50.101420 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-pm7np" event={"ID":"c634faec-64fc-4d2c-af70-94f85b6fcd59","Type":"ContainerStarted","Data":"2aa81f85ed253499667631b9c466f91c1c370e4321de0cd89b3d00e46f65201f"} Jan 06 14:52:50 crc kubenswrapper[4869]: I0106 14:52:50.101470 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-pm7np" Jan 06 14:52:50 crc kubenswrapper[4869]: I0106 14:52:50.107241 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lptv" event={"ID":"bdf02852-98f1-4f56-b0b4-593d1ada3dc0","Type":"ContainerDied","Data":"7611058e4183f6195c4c29c451fa6c2a4ca509677548753fdfa5f78e9bfd452c"} Jan 06 14:52:50 crc kubenswrapper[4869]: I0106 14:52:50.107288 4869 scope.go:117] "RemoveContainer" containerID="d13b5051d1e0b6e4d6d284caa9dbd749022a19fad9c6d79bfb3dee7a01f81a14" Jan 06 14:52:50 crc kubenswrapper[4869]: I0106 14:52:50.107305 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8lptv" Jan 06 14:52:50 crc kubenswrapper[4869]: I0106 14:52:50.148500 4869 scope.go:117] "RemoveContainer" containerID="b5d2d1f07ff21cd3425fd7c74f83ac7c3c5288c3573c6dba1fb4efe2fde2960b" Jan 06 14:52:50 crc kubenswrapper[4869]: I0106 14:52:50.186576 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lptv"] Jan 06 14:52:50 crc kubenswrapper[4869]: I0106 14:52:50.198335 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lptv"] Jan 06 14:52:50 crc kubenswrapper[4869]: I0106 14:52:50.230152 4869 scope.go:117] "RemoveContainer" containerID="d93e2a308cc78a9355e3a58e4a6f023af4f7b4d00bba518711323dc31731900b" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.118465 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7596f46b97-l75w2" event={"ID":"a9cad33b-8b9c-434b-9e28-f730ca0cba42","Type":"ContainerStarted","Data":"30f74308aae0c13d8243bc379744f7de380f32f4382c1603a2ea53597f6c9456"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.118780 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7596f46b97-l75w2" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.120721 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" event={"ID":"b295076d-930c-4a2b-9ba5-3cee1623e268","Type":"ContainerStarted","Data":"848b9ad7214118b73f5930b50f275026f215d3ec69ce858d483d899502e99b40"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.120880 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.123128 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq" event={"ID":"4a2ad023-66f0-45bc-9bea-b64cca26c388","Type":"ContainerStarted","Data":"bc053929566d3a70720c828dda0a447ec19674e7bd7057751b2a97f7da812ec2"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.123264 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.125692 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65" event={"ID":"81de01b0-a48a-4ca7-8509-9d12c5cb27da","Type":"ContainerStarted","Data":"e79c27dafa646043606bc01bfe299032596f4e89a602897c9118164f8220a543"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.125816 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.127535 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-5tjdn" event={"ID":"9fceb23f-1f65-40c7-b8e9-3de1097ecee2","Type":"ContainerStarted","Data":"b96c653e83cbc934ff3eaa26e2ade2702853849d8fae1405788bb7ac341a85a4"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.127725 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-5tjdn" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.129210 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg" event={"ID":"c0aac0d5-701b-4a75-9bd0-4c9530692565","Type":"ContainerStarted","Data":"7a494f501209ba236aa0939f97388ef41c9168e87214d8e786e39add3de9e7d1"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.129311 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.131142 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" event={"ID":"e39be0e5-0e29-45cd-925b-6eafb2b385a9","Type":"ContainerStarted","Data":"3c61228b0fd80aa8931520ffb8c0ab4cc34536fd14eafd8d15cb7429bbd708e2"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.135076 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" event={"ID":"da44c856-c228-45b1-947b-891308581bb6","Type":"ContainerStarted","Data":"98b104921838e47d41994ea5c45c6e4e7a8148ee1e12ed0b09d973c6a434e41c"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.135255 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.137161 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-p249l" event={"ID":"def35933-1964-4328-a9b2-dc9f72d11bcf","Type":"ContainerStarted","Data":"bb2e255d2ceec527a72f732385faba1d9bf01dd4ba23d517016be08f40b53aa7"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.137283 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-p249l" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.139295 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jtv5n" event={"ID":"ff01227e-d9f4-4dd0-bc22-455a00294406","Type":"ContainerStarted","Data":"d7036427ae69f9994490e50dc57fdc5a439bd2f519ef2a2ab3a0b718bda54fc8"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.141767 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.142983 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"87f14fc5e4a5a585b9cdd0431dcf0fc79590156d91b1f49079507cfc6c917c5d"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.145803 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" event={"ID":"3f4a328b-302b-496b-af2b-abec609682a6","Type":"ContainerStarted","Data":"ba2a4b95765571f8348223984bd1652b1d74cd54cf0387f9a88db0befc3920ff"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.145965 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.148739 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9" event={"ID":"4e8628c6-a97f-48ea-a91a-1ea5257c5e49","Type":"ContainerStarted","Data":"9fae1f3baa3633342e398f500045a220800a14d39604f2d6b0b3c075fd0212ef"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.148804 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.153951 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" event={"ID":"7a343a23-f2df-474c-842c-f999f7d0e9b4","Type":"ContainerStarted","Data":"7588108b2ae2f92a0318bf5ccd98f609d88f4f1af730ef2cdd0c514158635a10"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.154741 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.156600 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hcm2g" event={"ID":"81a6ac18-5e57-4f17-a5b3-64b76e59f83b","Type":"ContainerStarted","Data":"7c8f36c4010690acfd9242ab4d310d6623e4e4114383e0490e8652b43c094088"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.156850 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hcm2g" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.173872 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" event={"ID":"24ca9405-001a-4beb-a0fa-0f3775dab087","Type":"ContainerStarted","Data":"70d07eaa98618a8a6e942e5cca4923f77011b73d4022599da9a7eb9a507dcc01"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.174706 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.201203 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wl9w7" event={"ID":"ee1dd5c3-5e85-416c-933a-07fb51ec12d8","Type":"ContainerStarted","Data":"6555f42e39b5a5cfd403c8835203c3fcff01fdf2bc3f66ce36ec6e63905ea698"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.201243 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wl9w7" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.211577 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-7lbqn" event={"ID":"995201cd-f7dd-40a5-8854-192f32239e25","Type":"ContainerStarted","Data":"42323e3e7548a69425db25e31d67a543582af740a15661f03779e977349161a6"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.212683 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-7lbqn" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.214898 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d2jnv" event={"ID":"5427b0d1-29a3-47c0-9a1a-a945063ae129","Type":"ContainerStarted","Data":"481956caec4c738ce52a94035139ceb7097a57f53935da7c2645474e781e24f9"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.215899 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d2jnv" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.218245 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g6xt2" event={"ID":"d04195cb-3a00-4785-860d-8bb9537f42b7","Type":"ContainerStarted","Data":"caecc5426b8e091ed40058ee4d9c1624ea12e4606f50e5a42fb785846688c5cc"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.219064 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g6xt2" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.222202 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-2qx58" event={"ID":"6e523183-ec1a-481e-822e-67c457b448c0","Type":"ContainerStarted","Data":"9a495d6336161cc7bb0389cdfc9c85ba3b4c40c8e055ef01b5ce68450fc5fb63"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.222494 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-2qx58" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.308227 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdd7985d-7085-4e06-9be1-e35e94d9c544","Type":"ContainerStarted","Data":"72583624e13627d6aae08917ea97d1e4ea3d31c7a471e7b424eab70e0e534939"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.309161 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-notification-agent" containerStatusID={"Type":"cri-o","ID":"60e4a14d85b0924b8a1489568f909c8249d7bc8d624969226ccabd333a1fdf7b"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-notification-agent failed liveness probe, will be restarted" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.309249 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cdd7985d-7085-4e06-9be1-e35e94d9c544" containerName="ceilometer-notification-agent" containerID="cri-o://60e4a14d85b0924b8a1489568f909c8249d7bc8d624969226ccabd333a1fdf7b" gracePeriod=30 Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.342341 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz" event={"ID":"ea758643-2a27-40e6-8c7f-8b0020e0ad97","Type":"ContainerStarted","Data":"144ff2459ea5047c8eef53bf14b39b0b547b832599defb883861d31e42a8fec4"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.343464 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.391844 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-596cb89f89-kplkv" event={"ID":"32b4a497-f056-4c29-890a-bb5616a79adf","Type":"ContainerStarted","Data":"66f3c3a70d5b06648b9664e9f915363ca8b90f878748294e02bec0e7bd258f08"} Jan 06 
14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.406791 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-596cb89f89-kplkv" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.443802 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-npl5f" event={"ID":"9b55eca9-5342-4826-b2fd-3fe94520e1f2","Type":"ContainerStarted","Data":"964f2bdf89644b49f99d9b625dc750ac9dad37aa1896d7be46c9d2a10345220c"} Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.457582 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-npl5f" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.458490 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.730384 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdf02852-98f1-4f56-b0b4-593d1ada3dc0" path="/var/lib/kubelet/pods/bdf02852-98f1-4f56-b0b4-593d1ada3dc0/volumes" Jan 06 14:52:51 crc kubenswrapper[4869]: I0106 14:52:51.854328 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" Jan 06 14:52:53 crc kubenswrapper[4869]: I0106 14:52:53.092052 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 14:52:53 crc kubenswrapper[4869]: I0106 14:52:53.096080 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 14:52:56 crc kubenswrapper[4869]: I0106 14:52:56.479126 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 06 14:52:56 crc kubenswrapper[4869]: I0106 14:52:56.489441 4869 generic.go:334] "Generic (PLEG): container finished" podID="cdd7985d-7085-4e06-9be1-e35e94d9c544" containerID="60e4a14d85b0924b8a1489568f909c8249d7bc8d624969226ccabd333a1fdf7b" exitCode=0 Jan 06 14:52:56 crc kubenswrapper[4869]: I0106 14:52:56.489488 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdd7985d-7085-4e06-9be1-e35e94d9c544","Type":"ContainerDied","Data":"60e4a14d85b0924b8a1489568f909c8249d7bc8d624969226ccabd333a1fdf7b"} Jan 06 14:52:58 crc kubenswrapper[4869]: I0106 14:52:58.530051 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cdd7985d-7085-4e06-9be1-e35e94d9c544","Type":"ContainerStarted","Data":"81b62d2e44fa03bf6ad4d19104a85df761bbd77f2e48ed63dfbb1ea5b059308a"} Jan 06 14:53:00 crc kubenswrapper[4869]: I0106 14:53:00.691074 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-f6f74d6db-5tjdn" Jan 06 14:53:00 crc kubenswrapper[4869]: I0106 14:53:00.756228 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66f8b87655-g7gcq" Jan 06 14:53:00 crc kubenswrapper[4869]: I0106 14:53:00.834129 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7596f46b97-l75w2" Jan 06 14:53:00 crc kubenswrapper[4869]: I0106 
14:53:00.850073 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-658dd65b86-hcm2g" Jan 06 14:53:00 crc kubenswrapper[4869]: I0106 14:53:00.990733 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7f5ddd8d7b-npl5f" Jan 06 14:53:01 crc kubenswrapper[4869]: I0106 14:53:01.017749 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-78979fc445-2qx58" Jan 06 14:53:01 crc kubenswrapper[4869]: I0106 14:53:01.177268 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-f99f54bc8-g6xt2" Jan 06 14:53:01 crc kubenswrapper[4869]: I0106 14:53:01.235439 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7c8fb65dbf-55rl9" Jan 06 14:53:01 crc kubenswrapper[4869]: I0106 14:53:01.296736 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-598945d5b8-t4dkz" Jan 06 14:53:01 crc kubenswrapper[4869]: I0106 14:53:01.443315 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b88bfc995-pm7np" Jan 06 14:53:01 crc kubenswrapper[4869]: I0106 14:53:01.464315 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 06 14:53:01 crc kubenswrapper[4869]: I0106 14:53:01.477917 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7cd87b778f-7lbqn" Jan 06 14:53:01 crc kubenswrapper[4869]: I0106 14:53:01.516348 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5fbbf8b6cc-n78kg" Jan 06 14:53:01 crc kubenswrapper[4869]: I0106 14:53:01.645518 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bf6d4f946-wl9w7" Jan 06 14:53:01 crc kubenswrapper[4869]: I0106 14:53:01.687924 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-9b6f8f78c-p249l" Jan 06 14:53:01 crc kubenswrapper[4869]: I0106 14:53:01.722419 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:53:01 crc kubenswrapper[4869]: E0106 14:53:01.722704 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:53:01 crc kubenswrapper[4869]: I0106 14:53:01.813860 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-bb586bbf4-5ltk8" Jan 06 14:53:01 crc kubenswrapper[4869]: I0106 14:53:01.857094 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/octavia-operator-controller-manager-68c649d9d-r4ck9" Jan 06 14:53:01 crc kubenswrapper[4869]: I0106 14:53:01.888744 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-9dbdf6486-csthh" Jan 06 14:53:01 crc kubenswrapper[4869]: I0106 14:53:01.923291 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-68d988df55-d2jnv" Jan 06 14:53:02 crc kubenswrapper[4869]: I0106 14:53:02.026540 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-6c866cfdcb-45sp6" Jan 06 14:53:02 crc kubenswrapper[4869]: I0106 14:53:02.527244 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-596cb89f89-kplkv" Jan 06 14:53:02 crc kubenswrapper[4869]: I0106 14:53:02.904446 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6d99759cf-t68w7" Jan 06 14:53:03 crc kubenswrapper[4869]: I0106 14:53:03.414300 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-78948ddfd7s8247" Jan 06 14:53:03 crc kubenswrapper[4869]: I0106 14:53:03.726474 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7d77f59d59-zfch2" Jan 06 14:53:08 crc kubenswrapper[4869]: I0106 14:53:08.317534 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-d4ljj/must-gather-l5pk9"] Jan 06 14:53:08 crc kubenswrapper[4869]: E0106 14:53:08.318649 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8833c85f-4713-4005-a7ad-e3446d62c1cf" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 06 14:53:08 crc kubenswrapper[4869]: I0106 14:53:08.318685 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="8833c85f-4713-4005-a7ad-e3446d62c1cf" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 06 14:53:08 crc kubenswrapper[4869]: E0106 14:53:08.318712 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf02852-98f1-4f56-b0b4-593d1ada3dc0" containerName="extract-utilities" Jan 06 14:53:08 crc kubenswrapper[4869]: I0106 14:53:08.318721 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf02852-98f1-4f56-b0b4-593d1ada3dc0" containerName="extract-utilities" Jan 06 14:53:08 crc kubenswrapper[4869]: E0106 14:53:08.318739 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf02852-98f1-4f56-b0b4-593d1ada3dc0" containerName="registry-server" Jan 06 14:53:08 crc kubenswrapper[4869]: I0106 14:53:08.318747 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf02852-98f1-4f56-b0b4-593d1ada3dc0" containerName="registry-server" Jan 06 14:53:08 crc kubenswrapper[4869]: E0106 14:53:08.318776 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdf02852-98f1-4f56-b0b4-593d1ada3dc0" containerName="extract-content" Jan 06 14:53:08 crc kubenswrapper[4869]: I0106 14:53:08.318784 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdf02852-98f1-4f56-b0b4-593d1ada3dc0" containerName="extract-content" Jan 06 14:53:08 crc kubenswrapper[4869]: I0106 14:53:08.318994 4869 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="8833c85f-4713-4005-a7ad-e3446d62c1cf" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 06 14:53:08 crc kubenswrapper[4869]: I0106 14:53:08.319018 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdf02852-98f1-4f56-b0b4-593d1ada3dc0" containerName="registry-server" Jan 06 14:53:08 crc kubenswrapper[4869]: I0106 14:53:08.320311 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d4ljj/must-gather-l5pk9" Jan 06 14:53:08 crc kubenswrapper[4869]: I0106 14:53:08.322297 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-d4ljj"/"default-dockercfg-xgwzf" Jan 06 14:53:08 crc kubenswrapper[4869]: I0106 14:53:08.323266 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-d4ljj"/"openshift-service-ca.crt" Jan 06 14:53:08 crc kubenswrapper[4869]: I0106 14:53:08.326891 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-d4ljj/must-gather-l5pk9"] Jan 06 14:53:08 crc kubenswrapper[4869]: I0106 14:53:08.331118 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-d4ljj"/"kube-root-ca.crt" Jan 06 14:53:08 crc kubenswrapper[4869]: I0106 14:53:08.431982 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b297df44-0f15-44cc-a32c-d64f3aa57eb9-must-gather-output\") pod \"must-gather-l5pk9\" (UID: \"b297df44-0f15-44cc-a32c-d64f3aa57eb9\") " pod="openshift-must-gather-d4ljj/must-gather-l5pk9" Jan 06 14:53:08 crc kubenswrapper[4869]: I0106 14:53:08.432041 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx2cn\" (UniqueName: \"kubernetes.io/projected/b297df44-0f15-44cc-a32c-d64f3aa57eb9-kube-api-access-qx2cn\") pod \"must-gather-l5pk9\" (UID: \"b297df44-0f15-44cc-a32c-d64f3aa57eb9\") " pod="openshift-must-gather-d4ljj/must-gather-l5pk9" Jan 06 14:53:08 crc kubenswrapper[4869]: I0106 14:53:08.534100 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b297df44-0f15-44cc-a32c-d64f3aa57eb9-must-gather-output\") pod \"must-gather-l5pk9\" (UID: \"b297df44-0f15-44cc-a32c-d64f3aa57eb9\") " pod="openshift-must-gather-d4ljj/must-gather-l5pk9" Jan 06 14:53:08 crc kubenswrapper[4869]: I0106 14:53:08.534409 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx2cn\" (UniqueName: \"kubernetes.io/projected/b297df44-0f15-44cc-a32c-d64f3aa57eb9-kube-api-access-qx2cn\") pod \"must-gather-l5pk9\" (UID: \"b297df44-0f15-44cc-a32c-d64f3aa57eb9\") " pod="openshift-must-gather-d4ljj/must-gather-l5pk9" Jan 06 14:53:08 crc kubenswrapper[4869]: I0106 14:53:08.534938 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b297df44-0f15-44cc-a32c-d64f3aa57eb9-must-gather-output\") pod \"must-gather-l5pk9\" (UID: \"b297df44-0f15-44cc-a32c-d64f3aa57eb9\") " pod="openshift-must-gather-d4ljj/must-gather-l5pk9" Jan 06 14:53:08 crc kubenswrapper[4869]: I0106 14:53:08.578467 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx2cn\" (UniqueName: \"kubernetes.io/projected/b297df44-0f15-44cc-a32c-d64f3aa57eb9-kube-api-access-qx2cn\") pod \"must-gather-l5pk9\" (UID: 
\"b297df44-0f15-44cc-a32c-d64f3aa57eb9\") " pod="openshift-must-gather-d4ljj/must-gather-l5pk9" Jan 06 14:53:08 crc kubenswrapper[4869]: I0106 14:53:08.640141 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d4ljj/must-gather-l5pk9" Jan 06 14:53:08 crc kubenswrapper[4869]: I0106 14:53:08.939246 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-d4ljj/must-gather-l5pk9"] Jan 06 14:53:08 crc kubenswrapper[4869]: W0106 14:53:08.948642 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb297df44_0f15_44cc_a32c_d64f3aa57eb9.slice/crio-03f0d491601f9872451defe0fe5dc58b041d23d9fd37820d59562c95b9197854 WatchSource:0}: Error finding container 03f0d491601f9872451defe0fe5dc58b041d23d9fd37820d59562c95b9197854: Status 404 returned error can't find the container with id 03f0d491601f9872451defe0fe5dc58b041d23d9fd37820d59562c95b9197854 Jan 06 14:53:09 crc kubenswrapper[4869]: I0106 14:53:09.635567 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d4ljj/must-gather-l5pk9" event={"ID":"b297df44-0f15-44cc-a32c-d64f3aa57eb9","Type":"ContainerStarted","Data":"03f0d491601f9872451defe0fe5dc58b041d23d9fd37820d59562c95b9197854"} Jan 06 14:53:12 crc kubenswrapper[4869]: I0106 14:53:12.705050 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:53:12 crc kubenswrapper[4869]: E0106 14:53:12.706464 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:53:18 crc kubenswrapper[4869]: I0106 14:53:18.715854 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d4ljj/must-gather-l5pk9" event={"ID":"b297df44-0f15-44cc-a32c-d64f3aa57eb9","Type":"ContainerStarted","Data":"74129d0d9a2c65addc5ac2f2964f232ffcc65575736124b856ef456ffe9671dd"} Jan 06 14:53:18 crc kubenswrapper[4869]: I0106 14:53:18.716492 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d4ljj/must-gather-l5pk9" event={"ID":"b297df44-0f15-44cc-a32c-d64f3aa57eb9","Type":"ContainerStarted","Data":"d7c543cad3109cae0267611a534a248e5ddd087912086db552dded8ad9f319a4"} Jan 06 14:53:22 crc kubenswrapper[4869]: I0106 14:53:22.445240 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6ccb949c7b-7jw65" Jan 06 14:53:22 crc kubenswrapper[4869]: I0106 14:53:22.473029 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-d4ljj/must-gather-l5pk9" podStartSLOduration=5.413011539 podStartE2EDuration="14.473011109s" podCreationTimestamp="2026-01-06 14:53:08 +0000 UTC" firstStartedPulling="2026-01-06 14:53:08.951220279 +0000 UTC m=+3207.490907943" lastFinishedPulling="2026-01-06 14:53:18.011219849 +0000 UTC m=+3216.550907513" observedRunningTime="2026-01-06 14:53:18.733966072 +0000 UTC m=+3217.273653726" watchObservedRunningTime="2026-01-06 14:53:22.473011109 +0000 UTC m=+3221.012698773" Jan 06 14:53:23 crc kubenswrapper[4869]: E0106 14:53:23.406857 4869 
upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.230:48704->38.102.83.230:46819: write tcp 38.102.83.230:48704->38.102.83.230:46819: write: broken pipe Jan 06 14:53:24 crc kubenswrapper[4869]: I0106 14:53:24.797211 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-d4ljj/crc-debug-pcktp"] Jan 06 14:53:24 crc kubenswrapper[4869]: I0106 14:53:24.799151 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d4ljj/crc-debug-pcktp" Jan 06 14:53:24 crc kubenswrapper[4869]: I0106 14:53:24.913204 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/78321e5e-8502-4ca0-87f1-e8833ff7a056-host\") pod \"crc-debug-pcktp\" (UID: \"78321e5e-8502-4ca0-87f1-e8833ff7a056\") " pod="openshift-must-gather-d4ljj/crc-debug-pcktp" Jan 06 14:53:24 crc kubenswrapper[4869]: I0106 14:53:24.913415 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz85b\" (UniqueName: \"kubernetes.io/projected/78321e5e-8502-4ca0-87f1-e8833ff7a056-kube-api-access-fz85b\") pod \"crc-debug-pcktp\" (UID: \"78321e5e-8502-4ca0-87f1-e8833ff7a056\") " pod="openshift-must-gather-d4ljj/crc-debug-pcktp" Jan 06 14:53:25 crc kubenswrapper[4869]: I0106 14:53:25.015065 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz85b\" (UniqueName: \"kubernetes.io/projected/78321e5e-8502-4ca0-87f1-e8833ff7a056-kube-api-access-fz85b\") pod \"crc-debug-pcktp\" (UID: \"78321e5e-8502-4ca0-87f1-e8833ff7a056\") " pod="openshift-must-gather-d4ljj/crc-debug-pcktp" Jan 06 14:53:25 crc kubenswrapper[4869]: I0106 14:53:25.015172 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/78321e5e-8502-4ca0-87f1-e8833ff7a056-host\") pod \"crc-debug-pcktp\" (UID: \"78321e5e-8502-4ca0-87f1-e8833ff7a056\") " pod="openshift-must-gather-d4ljj/crc-debug-pcktp" Jan 06 14:53:25 crc kubenswrapper[4869]: I0106 14:53:25.015275 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/78321e5e-8502-4ca0-87f1-e8833ff7a056-host\") pod \"crc-debug-pcktp\" (UID: \"78321e5e-8502-4ca0-87f1-e8833ff7a056\") " pod="openshift-must-gather-d4ljj/crc-debug-pcktp" Jan 06 14:53:25 crc kubenswrapper[4869]: I0106 14:53:25.044484 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz85b\" (UniqueName: \"kubernetes.io/projected/78321e5e-8502-4ca0-87f1-e8833ff7a056-kube-api-access-fz85b\") pod \"crc-debug-pcktp\" (UID: \"78321e5e-8502-4ca0-87f1-e8833ff7a056\") " pod="openshift-must-gather-d4ljj/crc-debug-pcktp" Jan 06 14:53:25 crc kubenswrapper[4869]: I0106 14:53:25.117811 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d4ljj/crc-debug-pcktp" Jan 06 14:53:25 crc kubenswrapper[4869]: I0106 14:53:25.776194 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d4ljj/crc-debug-pcktp" event={"ID":"78321e5e-8502-4ca0-87f1-e8833ff7a056","Type":"ContainerStarted","Data":"165dfa8935f8b98b3c9b4ed67afcd3c4525056104a1dd72292d08f94d18cd3eb"} Jan 06 14:53:27 crc kubenswrapper[4869]: I0106 14:53:27.703923 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:53:27 crc kubenswrapper[4869]: E0106 14:53:27.704167 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 14:53:38 crc kubenswrapper[4869]: I0106 14:53:38.899191 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d4ljj/crc-debug-pcktp" event={"ID":"78321e5e-8502-4ca0-87f1-e8833ff7a056","Type":"ContainerStarted","Data":"64be19045234c0ac1f684977742c8045985ac3ed623d6719653850456f2b1544"} Jan 06 14:53:38 crc kubenswrapper[4869]: I0106 14:53:38.917726 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-d4ljj/crc-debug-pcktp" podStartSLOduration=1.9956744830000002 podStartE2EDuration="14.917708464s" podCreationTimestamp="2026-01-06 14:53:24 +0000 UTC" firstStartedPulling="2026-01-06 14:53:25.16711866 +0000 UTC m=+3223.706806314" lastFinishedPulling="2026-01-06 14:53:38.089152631 +0000 UTC m=+3236.628840295" observedRunningTime="2026-01-06 14:53:38.912707172 +0000 UTC m=+3237.452394836" watchObservedRunningTime="2026-01-06 14:53:38.917708464 +0000 UTC m=+3237.457396128" Jan 06 14:53:39 crc kubenswrapper[4869]: I0106 14:53:39.704033 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:53:40 crc kubenswrapper[4869]: I0106 14:53:40.917245 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerStarted","Data":"b2931da1dc48569fc1d9b1b3f1e0812f52d961821796eb9a8b76abca6a174489"} Jan 06 14:53:57 crc kubenswrapper[4869]: I0106 14:53:57.066445 4869 generic.go:334] "Generic (PLEG): container finished" podID="78321e5e-8502-4ca0-87f1-e8833ff7a056" containerID="64be19045234c0ac1f684977742c8045985ac3ed623d6719653850456f2b1544" exitCode=0 Jan 06 14:53:57 crc kubenswrapper[4869]: I0106 14:53:57.066597 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d4ljj/crc-debug-pcktp" event={"ID":"78321e5e-8502-4ca0-87f1-e8833ff7a056","Type":"ContainerDied","Data":"64be19045234c0ac1f684977742c8045985ac3ed623d6719653850456f2b1544"} Jan 06 14:53:58 crc kubenswrapper[4869]: I0106 14:53:58.200929 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d4ljj/crc-debug-pcktp" Jan 06 14:53:58 crc kubenswrapper[4869]: I0106 14:53:58.236198 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/78321e5e-8502-4ca0-87f1-e8833ff7a056-host\") pod \"78321e5e-8502-4ca0-87f1-e8833ff7a056\" (UID: \"78321e5e-8502-4ca0-87f1-e8833ff7a056\") " Jan 06 14:53:58 crc kubenswrapper[4869]: I0106 14:53:58.236369 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz85b\" (UniqueName: \"kubernetes.io/projected/78321e5e-8502-4ca0-87f1-e8833ff7a056-kube-api-access-fz85b\") pod \"78321e5e-8502-4ca0-87f1-e8833ff7a056\" (UID: \"78321e5e-8502-4ca0-87f1-e8833ff7a056\") " Jan 06 14:53:58 crc kubenswrapper[4869]: I0106 14:53:58.236735 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-d4ljj/crc-debug-pcktp"] Jan 06 14:53:58 crc kubenswrapper[4869]: I0106 14:53:58.236824 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78321e5e-8502-4ca0-87f1-e8833ff7a056-host" (OuterVolumeSpecName: "host") pod "78321e5e-8502-4ca0-87f1-e8833ff7a056" (UID: "78321e5e-8502-4ca0-87f1-e8833ff7a056"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:53:58 crc kubenswrapper[4869]: I0106 14:53:58.236995 4869 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/78321e5e-8502-4ca0-87f1-e8833ff7a056-host\") on node \"crc\" DevicePath \"\"" Jan 06 14:53:58 crc kubenswrapper[4869]: I0106 14:53:58.249908 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-d4ljj/crc-debug-pcktp"] Jan 06 14:53:58 crc kubenswrapper[4869]: I0106 14:53:58.256775 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78321e5e-8502-4ca0-87f1-e8833ff7a056-kube-api-access-fz85b" (OuterVolumeSpecName: "kube-api-access-fz85b") pod "78321e5e-8502-4ca0-87f1-e8833ff7a056" (UID: "78321e5e-8502-4ca0-87f1-e8833ff7a056"). InnerVolumeSpecName "kube-api-access-fz85b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:53:58 crc kubenswrapper[4869]: I0106 14:53:58.338799 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz85b\" (UniqueName: \"kubernetes.io/projected/78321e5e-8502-4ca0-87f1-e8833ff7a056-kube-api-access-fz85b\") on node \"crc\" DevicePath \"\"" Jan 06 14:53:59 crc kubenswrapper[4869]: I0106 14:53:59.096730 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="165dfa8935f8b98b3c9b4ed67afcd3c4525056104a1dd72292d08f94d18cd3eb" Jan 06 14:53:59 crc kubenswrapper[4869]: I0106 14:53:59.096825 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d4ljj/crc-debug-pcktp" Jan 06 14:53:59 crc kubenswrapper[4869]: I0106 14:53:59.489382 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-d4ljj/crc-debug-dpd57"] Jan 06 14:53:59 crc kubenswrapper[4869]: E0106 14:53:59.489880 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78321e5e-8502-4ca0-87f1-e8833ff7a056" containerName="container-00" Jan 06 14:53:59 crc kubenswrapper[4869]: I0106 14:53:59.489897 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="78321e5e-8502-4ca0-87f1-e8833ff7a056" containerName="container-00" Jan 06 14:53:59 crc kubenswrapper[4869]: I0106 14:53:59.490139 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="78321e5e-8502-4ca0-87f1-e8833ff7a056" containerName="container-00" Jan 06 14:53:59 crc kubenswrapper[4869]: I0106 14:53:59.490931 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d4ljj/crc-debug-dpd57" Jan 06 14:53:59 crc kubenswrapper[4869]: I0106 14:53:59.559431 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdjmt\" (UniqueName: \"kubernetes.io/projected/fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18-kube-api-access-hdjmt\") pod \"crc-debug-dpd57\" (UID: \"fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18\") " pod="openshift-must-gather-d4ljj/crc-debug-dpd57" Jan 06 14:53:59 crc kubenswrapper[4869]: I0106 14:53:59.559507 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18-host\") pod \"crc-debug-dpd57\" (UID: \"fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18\") " pod="openshift-must-gather-d4ljj/crc-debug-dpd57" Jan 06 14:53:59 crc kubenswrapper[4869]: I0106 14:53:59.661169 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdjmt\" (UniqueName: \"kubernetes.io/projected/fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18-kube-api-access-hdjmt\") pod \"crc-debug-dpd57\" (UID: \"fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18\") " pod="openshift-must-gather-d4ljj/crc-debug-dpd57" Jan 06 14:53:59 crc kubenswrapper[4869]: I0106 14:53:59.661254 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18-host\") pod \"crc-debug-dpd57\" (UID: \"fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18\") " pod="openshift-must-gather-d4ljj/crc-debug-dpd57" Jan 06 14:53:59 crc kubenswrapper[4869]: I0106 14:53:59.661368 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18-host\") pod \"crc-debug-dpd57\" (UID: \"fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18\") " pod="openshift-must-gather-d4ljj/crc-debug-dpd57" Jan 06 14:53:59 crc kubenswrapper[4869]: I0106 14:53:59.677860 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdjmt\" (UniqueName: \"kubernetes.io/projected/fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18-kube-api-access-hdjmt\") pod \"crc-debug-dpd57\" (UID: \"fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18\") " pod="openshift-must-gather-d4ljj/crc-debug-dpd57" Jan 06 14:53:59 crc kubenswrapper[4869]: I0106 14:53:59.716283 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78321e5e-8502-4ca0-87f1-e8833ff7a056" 
path="/var/lib/kubelet/pods/78321e5e-8502-4ca0-87f1-e8833ff7a056/volumes" Jan 06 14:53:59 crc kubenswrapper[4869]: I0106 14:53:59.810808 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d4ljj/crc-debug-dpd57" Jan 06 14:54:00 crc kubenswrapper[4869]: I0106 14:54:00.106040 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d4ljj/crc-debug-dpd57" event={"ID":"fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18","Type":"ContainerStarted","Data":"227b51a7b3a7f9e87381cdfd3978c73118ccf493a3131bb04719e3ed070595d4"} Jan 06 14:54:00 crc kubenswrapper[4869]: I0106 14:54:00.106313 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d4ljj/crc-debug-dpd57" event={"ID":"fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18","Type":"ContainerStarted","Data":"07daef1f881ad8b224db5a87e2594219143cdb8c60e4467c1bc2634056d1e83c"} Jan 06 14:54:00 crc kubenswrapper[4869]: E0106 14:54:00.288055 4869 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1fd9bd_b480_4e25_8c08_b13c9e0c0a18.slice/crio-conmon-227b51a7b3a7f9e87381cdfd3978c73118ccf493a3131bb04719e3ed070595d4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe1fd9bd_b480_4e25_8c08_b13c9e0c0a18.slice/crio-227b51a7b3a7f9e87381cdfd3978c73118ccf493a3131bb04719e3ed070595d4.scope\": RecentStats: unable to find data in memory cache]" Jan 06 14:54:01 crc kubenswrapper[4869]: I0106 14:54:01.116895 4869 generic.go:334] "Generic (PLEG): container finished" podID="fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18" containerID="227b51a7b3a7f9e87381cdfd3978c73118ccf493a3131bb04719e3ed070595d4" exitCode=1 Jan 06 14:54:01 crc kubenswrapper[4869]: I0106 14:54:01.116944 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d4ljj/crc-debug-dpd57" event={"ID":"fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18","Type":"ContainerDied","Data":"227b51a7b3a7f9e87381cdfd3978c73118ccf493a3131bb04719e3ed070595d4"} Jan 06 14:54:01 crc kubenswrapper[4869]: I0106 14:54:01.158496 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-d4ljj/crc-debug-dpd57"] Jan 06 14:54:01 crc kubenswrapper[4869]: I0106 14:54:01.184194 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-d4ljj/crc-debug-dpd57"] Jan 06 14:54:02 crc kubenswrapper[4869]: I0106 14:54:02.212100 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d4ljj/crc-debug-dpd57" Jan 06 14:54:02 crc kubenswrapper[4869]: I0106 14:54:02.310005 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdjmt\" (UniqueName: \"kubernetes.io/projected/fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18-kube-api-access-hdjmt\") pod \"fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18\" (UID: \"fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18\") " Jan 06 14:54:02 crc kubenswrapper[4869]: I0106 14:54:02.310231 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18-host\") pod \"fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18\" (UID: \"fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18\") " Jan 06 14:54:02 crc kubenswrapper[4869]: I0106 14:54:02.310405 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18-host" (OuterVolumeSpecName: "host") pod "fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18" (UID: "fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 06 14:54:02 crc kubenswrapper[4869]: I0106 14:54:02.310641 4869 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18-host\") on node \"crc\" DevicePath \"\"" Jan 06 14:54:02 crc kubenswrapper[4869]: I0106 14:54:02.315425 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18-kube-api-access-hdjmt" (OuterVolumeSpecName: "kube-api-access-hdjmt") pod "fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18" (UID: "fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18"). InnerVolumeSpecName "kube-api-access-hdjmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:54:02 crc kubenswrapper[4869]: I0106 14:54:02.412901 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdjmt\" (UniqueName: \"kubernetes.io/projected/fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18-kube-api-access-hdjmt\") on node \"crc\" DevicePath \"\"" Jan 06 14:54:03 crc kubenswrapper[4869]: I0106 14:54:03.135362 4869 scope.go:117] "RemoveContainer" containerID="227b51a7b3a7f9e87381cdfd3978c73118ccf493a3131bb04719e3ed070595d4" Jan 06 14:54:03 crc kubenswrapper[4869]: I0106 14:54:03.135587 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-d4ljj/crc-debug-dpd57" Jan 06 14:54:03 crc kubenswrapper[4869]: I0106 14:54:03.716472 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18" path="/var/lib/kubelet/pods/fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18/volumes" Jan 06 14:54:57 crc kubenswrapper[4869]: I0106 14:54:57.690037 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8685d8b6-46cdt_170f68c0-a435-4022-8c3b-82f60b06fbac/barbican-api/0.log" Jan 06 14:54:57 crc kubenswrapper[4869]: I0106 14:54:57.844071 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8685d8b6-46cdt_170f68c0-a435-4022-8c3b-82f60b06fbac/barbican-api-log/0.log" Jan 06 14:54:57 crc kubenswrapper[4869]: I0106 14:54:57.983078 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5599cd5d56-8h5sr_fce6b66f-ac24-4b7b-98aa-39a87666921b/barbican-keystone-listener/0.log" Jan 06 14:54:57 crc kubenswrapper[4869]: I0106 14:54:57.997343 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5599cd5d56-8h5sr_fce6b66f-ac24-4b7b-98aa-39a87666921b/barbican-keystone-listener-log/0.log" Jan 06 14:54:58 crc kubenswrapper[4869]: I0106 14:54:58.153335 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7df5f64db9-vv7hq_a6c68901-bae4-40c6-a65d-a7b0834e2d71/barbican-worker/0.log" Jan 06 14:54:58 crc kubenswrapper[4869]: I0106 14:54:58.205879 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7df5f64db9-vv7hq_a6c68901-bae4-40c6-a65d-a7b0834e2d71/barbican-worker-log/0.log" Jan 06 14:54:58 crc kubenswrapper[4869]: I0106 14:54:58.345541 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-6t9q8_99252e16-5d75-4719-84a1-80ef3a8bfa39/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 06 14:54:58 crc kubenswrapper[4869]: I0106 14:54:58.422247 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cdd7985d-7085-4e06-9be1-e35e94d9c544/ceilometer-central-agent/1.log" Jan 06 14:54:58 crc kubenswrapper[4869]: I0106 14:54:58.525250 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cdd7985d-7085-4e06-9be1-e35e94d9c544/ceilometer-central-agent/0.log" Jan 06 14:54:58 crc kubenswrapper[4869]: I0106 14:54:58.619875 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cdd7985d-7085-4e06-9be1-e35e94d9c544/ceilometer-notification-agent/1.log" Jan 06 14:54:58 crc kubenswrapper[4869]: I0106 14:54:58.622932 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cdd7985d-7085-4e06-9be1-e35e94d9c544/ceilometer-notification-agent/0.log" Jan 06 14:54:58 crc kubenswrapper[4869]: I0106 14:54:58.709057 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cdd7985d-7085-4e06-9be1-e35e94d9c544/proxy-httpd/0.log" Jan 06 14:54:58 crc kubenswrapper[4869]: I0106 14:54:58.767505 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cdd7985d-7085-4e06-9be1-e35e94d9c544/sg-core/0.log" Jan 06 14:54:58 crc kubenswrapper[4869]: I0106 14:54:58.823504 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-dnfdh_a9894b84-99cf-4a02-8d21-3795a64be01a/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Jan 06 14:54:59 crc kubenswrapper[4869]: I0106 14:54:59.013823 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-skpnm_3c8a55f5-3919-44f7-b3b4-54397f2e3b11/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Jan 06 14:54:59 crc kubenswrapper[4869]: I0106 14:54:59.099704 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bdcb07fc-c984-417a-aecb-6f0a2a83f487/cinder-api/0.log" Jan 06 14:54:59 crc kubenswrapper[4869]: I0106 14:54:59.205748 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bdcb07fc-c984-417a-aecb-6f0a2a83f487/cinder-api-log/0.log" Jan 06 14:54:59 crc kubenswrapper[4869]: I0106 14:54:59.326180 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0d05e9f4-29bf-4c4b-8930-7346c2f4b33d/cinder-scheduler/0.log" Jan 06 14:54:59 crc kubenswrapper[4869]: I0106 14:54:59.343852 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0d05e9f4-29bf-4c4b-8930-7346c2f4b33d/probe/0.log" Jan 06 14:54:59 crc kubenswrapper[4869]: I0106 14:54:59.497864 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-xhxvk_f0b5ad51-bd5e-4805-9c00-4d3fd82a61a1/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 06 14:54:59 crc kubenswrapper[4869]: I0106 14:54:59.549409 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-z6g8b_1d87f359-40bb-40c9-b5f4-9b390767b167/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 06 14:54:59 crc kubenswrapper[4869]: I0106 14:54:59.718247 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-864d5fc68c-5vx6z_254ec1d5-4669-4f52-b206-ff0c28541337/init/0.log" Jan 06 14:54:59 crc kubenswrapper[4869]: I0106 14:54:59.902056 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-864d5fc68c-5vx6z_254ec1d5-4669-4f52-b206-ff0c28541337/init/0.log" Jan 06 14:54:59 crc kubenswrapper[4869]: I0106 14:54:59.920495 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-jfkzt_319f344b-5374-42d9-bfea-f25f3717ccf9/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 06 14:55:00 crc kubenswrapper[4869]: I0106 14:55:00.030527 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-864d5fc68c-5vx6z_254ec1d5-4669-4f52-b206-ff0c28541337/dnsmasq-dns/0.log" Jan 06 14:55:00 crc kubenswrapper[4869]: I0106 14:55:00.363148 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-xdtdn_3ecdff9b-23e8-4883-9128-37da64316185/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 06 14:55:00 crc kubenswrapper[4869]: I0106 14:55:00.479235 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5df48645c5-c7ccn_2671efdf-3270-4f9e-8a55-6a6f1f52497e/keystone-api/0.log" Jan 06 14:55:00 crc kubenswrapper[4869]: I0106 14:55:00.580861 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_556f7f3f-b9e0-4e69-a659-5ef5d052a7b4/kube-state-metrics/0.log" Jan 06 14:55:00 
crc kubenswrapper[4869]: I0106 14:55:00.713460 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-qrkbj_bb8a6d75-fe0e-4703-b592-39c4ff9241d5/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 06 14:55:00 crc kubenswrapper[4869]: I0106 14:55:00.935837 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77f9b5db4f-c4t9m_27991635-1274-47d8-b264-0ff73afb91aa/neutron-api/0.log" Jan 06 14:55:01 crc kubenswrapper[4869]: I0106 14:55:01.037230 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-77f9b5db4f-c4t9m_27991635-1274-47d8-b264-0ff73afb91aa/neutron-httpd/0.log" Jan 06 14:55:01 crc kubenswrapper[4869]: I0106 14:55:01.189510 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-4cdqz_37c9408c-3a8b-4246-a7ec-d2e99d49790f/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 06 14:55:01 crc kubenswrapper[4869]: I0106 14:55:01.565081 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_dc51cbf9-373a-4c1d-9314-d382cab1b09f/nova-api-log/0.log" Jan 06 14:55:01 crc kubenswrapper[4869]: I0106 14:55:01.760821 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_dc51cbf9-373a-4c1d-9314-d382cab1b09f/nova-api-api/0.log" Jan 06 14:55:01 crc kubenswrapper[4869]: I0106 14:55:01.784717 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_48f633eb-c984-48c6-91ec-4b4918036e39/nova-cell0-conductor-conductor/0.log" Jan 06 14:55:01 crc kubenswrapper[4869]: I0106 14:55:01.922255 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_ca1929b8-a2a1-40cb-81d2-666f2687a69d/nova-cell1-conductor-conductor/0.log" Jan 06 14:55:02 crc kubenswrapper[4869]: I0106 14:55:02.099457 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_76afcc40-ce0e-43d9-8166-a5c8070f8245/nova-cell1-novncproxy-novncproxy/0.log" Jan 06 14:55:02 crc kubenswrapper[4869]: I0106 14:55:02.223116 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-4ndxj_8833c85f-4713-4005-a7ad-e3446d62c1cf/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Jan 06 14:55:02 crc kubenswrapper[4869]: I0106 14:55:02.358204 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ec3bfe55-109b-43b1-9cde-46333d3d826d/nova-metadata-log/0.log" Jan 06 14:55:02 crc kubenswrapper[4869]: I0106 14:55:02.665728 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_7614e382-a4ea-473a-bb59-4cf065777f95/nova-scheduler-scheduler/0.log" Jan 06 14:55:02 crc kubenswrapper[4869]: I0106 14:55:02.724719 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b5ecad54-1487-4d25-9bd1-e6e486ba59d5/mysql-bootstrap/0.log" Jan 06 14:55:02 crc kubenswrapper[4869]: I0106 14:55:02.927953 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b5ecad54-1487-4d25-9bd1-e6e486ba59d5/galera/0.log" Jan 06 14:55:02 crc kubenswrapper[4869]: I0106 14:55:02.955509 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b5ecad54-1487-4d25-9bd1-e6e486ba59d5/mysql-bootstrap/0.log" Jan 06 14:55:03 crc kubenswrapper[4869]: I0106 
14:55:03.136455 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_be48d5b3-d81d-4bb6-a7a6-7706d8208db8/mysql-bootstrap/0.log" Jan 06 14:55:03 crc kubenswrapper[4869]: I0106 14:55:03.354231 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_be48d5b3-d81d-4bb6-a7a6-7706d8208db8/galera/0.log" Jan 06 14:55:03 crc kubenswrapper[4869]: I0106 14:55:03.386622 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_be48d5b3-d81d-4bb6-a7a6-7706d8208db8/mysql-bootstrap/0.log" Jan 06 14:55:03 crc kubenswrapper[4869]: I0106 14:55:03.415274 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ec3bfe55-109b-43b1-9cde-46333d3d826d/nova-metadata-metadata/0.log" Jan 06 14:55:03 crc kubenswrapper[4869]: I0106 14:55:03.530857 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_368ebbb8-5558-42d5-a18d-516ff3e623bf/openstackclient/0.log" Jan 06 14:55:03 crc kubenswrapper[4869]: I0106 14:55:03.689531 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-l72g5_580a8eb0-0af1-4e26-922f-2714f581c604/openstack-network-exporter/0.log" Jan 06 14:55:03 crc kubenswrapper[4869]: I0106 14:55:03.738969 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-mmg7w_aaa27703-fd83-40d0-a8fb-8d6962212f8f/ovn-controller/0.log" Jan 06 14:55:03 crc kubenswrapper[4869]: I0106 14:55:03.854419 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-64n65_15ab1556-2fd1-423a-9759-4c1088500a85/ovsdb-server-init/0.log" Jan 06 14:55:04 crc kubenswrapper[4869]: I0106 14:55:04.067576 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-64n65_15ab1556-2fd1-423a-9759-4c1088500a85/ovs-vswitchd/0.log" Jan 06 14:55:04 crc kubenswrapper[4869]: I0106 14:55:04.111387 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-64n65_15ab1556-2fd1-423a-9759-4c1088500a85/ovsdb-server/0.log" Jan 06 14:55:04 crc kubenswrapper[4869]: I0106 14:55:04.329745 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-64n65_15ab1556-2fd1-423a-9759-4c1088500a85/ovsdb-server-init/0.log" Jan 06 14:55:04 crc kubenswrapper[4869]: I0106 14:55:04.516159 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_025cc3c7-4b77-4fcf-b05e-4abde5125639/openstack-network-exporter/0.log" Jan 06 14:55:04 crc kubenswrapper[4869]: I0106 14:55:04.522636 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-d7lg5_b5f1c551-161d-40cc-a7bb-475eab4b0f98/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 06 14:55:04 crc kubenswrapper[4869]: I0106 14:55:04.573907 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_025cc3c7-4b77-4fcf-b05e-4abde5125639/ovn-northd/0.log" Jan 06 14:55:04 crc kubenswrapper[4869]: I0106 14:55:04.742199 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7127872e-e183-49cf-a8e2-153197597bea/openstack-network-exporter/0.log" Jan 06 14:55:04 crc kubenswrapper[4869]: I0106 14:55:04.800441 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7127872e-e183-49cf-a8e2-153197597bea/ovsdbserver-nb/0.log" Jan 06 14:55:04 crc kubenswrapper[4869]: I0106 
14:55:04.935830 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6a6edbf6-4b64-4319-b863-6e9e5f08746f/ovsdbserver-sb/0.log" Jan 06 14:55:04 crc kubenswrapper[4869]: I0106 14:55:04.967990 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6a6edbf6-4b64-4319-b863-6e9e5f08746f/openstack-network-exporter/0.log" Jan 06 14:55:05 crc kubenswrapper[4869]: I0106 14:55:05.056142 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-749bc7d596-scpc9_9930dd36-d171-453b-ad1a-7344e6ddb59a/placement-api/0.log" Jan 06 14:55:05 crc kubenswrapper[4869]: I0106 14:55:05.304281 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-749bc7d596-scpc9_9930dd36-d171-453b-ad1a-7344e6ddb59a/placement-log/0.log" Jan 06 14:55:05 crc kubenswrapper[4869]: I0106 14:55:05.318333 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8efa0859-d909-40a3-8868-2cee1b98f0dd/setup-container/0.log" Jan 06 14:55:05 crc kubenswrapper[4869]: I0106 14:55:05.456035 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8efa0859-d909-40a3-8868-2cee1b98f0dd/setup-container/0.log" Jan 06 14:55:05 crc kubenswrapper[4869]: I0106 14:55:05.523368 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8efa0859-d909-40a3-8868-2cee1b98f0dd/rabbitmq/0.log" Jan 06 14:55:05 crc kubenswrapper[4869]: I0106 14:55:05.600579 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46/setup-container/0.log" Jan 06 14:55:05 crc kubenswrapper[4869]: I0106 14:55:05.811486 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46/setup-container/0.log" Jan 06 14:55:05 crc kubenswrapper[4869]: I0106 14:55:05.892914 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_84bf8e26-99e4-4a3d-ad0a-7b4ef8f27b46/rabbitmq/0.log" Jan 06 14:55:05 crc kubenswrapper[4869]: I0106 14:55:05.916169 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-lzj7z_45ba022e-05a0-419d-ae5a-4e77bfc47b8c/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 06 14:55:06 crc kubenswrapper[4869]: I0106 14:55:06.118819 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-f4vz5_ab1d6597-5db5-4759-b339-5ad35fcdbd8a/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 06 14:55:06 crc kubenswrapper[4869]: I0106 14:55:06.188802 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-g8qtp_90cfd369-16f0-4bf5-99df-8884d9db5240/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 06 14:55:06 crc kubenswrapper[4869]: I0106 14:55:06.404958 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-hgcbw_c084a08a-3404-4e7e-b216-d2426f9e0a48/ssh-known-hosts-edpm-deployment/0.log" Jan 06 14:55:06 crc kubenswrapper[4869]: I0106 14:55:06.447086 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-kwl6r_c0c8d127-5a60-4d66-8c61-1b430b37a374/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 06 14:55:08 crc kubenswrapper[4869]: I0106 
14:55:08.537722 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_19bf085e-32cc-4a29-9a2f-ea0b8045c193/memcached/0.log" Jan 06 14:55:29 crc kubenswrapper[4869]: I0106 14:55:29.322450 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-f6f74d6db-5tjdn_9fceb23f-1f65-40c7-b8e9-3de1097ecee2/manager/1.log" Jan 06 14:55:29 crc kubenswrapper[4869]: I0106 14:55:29.396787 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-f6f74d6db-5tjdn_9fceb23f-1f65-40c7-b8e9-3de1097ecee2/manager/0.log" Jan 06 14:55:29 crc kubenswrapper[4869]: I0106 14:55:29.572480 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-78979fc445-2qx58_6e523183-ec1a-481e-822e-67c457b448c0/manager/1.log" Jan 06 14:55:29 crc kubenswrapper[4869]: I0106 14:55:29.612880 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-78979fc445-2qx58_6e523183-ec1a-481e-822e-67c457b448c0/manager/0.log" Jan 06 14:55:29 crc kubenswrapper[4869]: I0106 14:55:29.778300 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz_4bf904d2-df2b-4d07-b3ab-ed4881daeef4/util/0.log" Jan 06 14:55:29 crc kubenswrapper[4869]: I0106 14:55:29.933389 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz_4bf904d2-df2b-4d07-b3ab-ed4881daeef4/util/0.log" Jan 06 14:55:29 crc kubenswrapper[4869]: I0106 14:55:29.974811 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz_4bf904d2-df2b-4d07-b3ab-ed4881daeef4/pull/0.log" Jan 06 14:55:29 crc kubenswrapper[4869]: I0106 14:55:29.991791 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz_4bf904d2-df2b-4d07-b3ab-ed4881daeef4/pull/0.log" Jan 06 14:55:30 crc kubenswrapper[4869]: I0106 14:55:30.183289 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz_4bf904d2-df2b-4d07-b3ab-ed4881daeef4/extract/0.log" Jan 06 14:55:30 crc kubenswrapper[4869]: I0106 14:55:30.190925 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz_4bf904d2-df2b-4d07-b3ab-ed4881daeef4/util/0.log" Jan 06 14:55:30 crc kubenswrapper[4869]: I0106 14:55:30.207745 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_da363eef693d32039640351fa01e43705cef5afb78ec585cfc3c4bb565xjqdz_4bf904d2-df2b-4d07-b3ab-ed4881daeef4/pull/0.log" Jan 06 14:55:30 crc kubenswrapper[4869]: I0106 14:55:30.409168 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-g7gcq_4a2ad023-66f0-45bc-9bea-b64cca26c388/manager/1.log" Jan 06 14:55:30 crc kubenswrapper[4869]: I0106 14:55:30.410697 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66f8b87655-g7gcq_4a2ad023-66f0-45bc-9bea-b64cca26c388/manager/0.log" Jan 06 14:55:30 crc kubenswrapper[4869]: I0106 
14:55:30.465776 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7596f46b97-l75w2_a9cad33b-8b9c-434b-9e28-f730ca0cba42/manager/1.log" Jan 06 14:55:30 crc kubenswrapper[4869]: I0106 14:55:30.640119 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-hcm2g_81a6ac18-5e57-4f17-a5b3-64b76e59f83b/manager/1.log" Jan 06 14:55:30 crc kubenswrapper[4869]: I0106 14:55:30.683558 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7596f46b97-l75w2_a9cad33b-8b9c-434b-9e28-f730ca0cba42/manager/0.log" Jan 06 14:55:30 crc kubenswrapper[4869]: I0106 14:55:30.695687 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-658dd65b86-hcm2g_81a6ac18-5e57-4f17-a5b3-64b76e59f83b/manager/0.log" Jan 06 14:55:30 crc kubenswrapper[4869]: I0106 14:55:30.871762 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-npl5f_9b55eca9-5342-4826-b2fd-3fe94520e1f2/manager/1.log" Jan 06 14:55:30 crc kubenswrapper[4869]: I0106 14:55:30.884384 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7f5ddd8d7b-npl5f_9b55eca9-5342-4826-b2fd-3fe94520e1f2/manager/0.log" Jan 06 14:55:31 crc kubenswrapper[4869]: I0106 14:55:31.123606 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6d99759cf-t68w7_b295076d-930c-4a2b-9ba5-3cee1623e268/manager/1.log" Jan 06 14:55:31 crc kubenswrapper[4869]: I0106 14:55:31.185658 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-g6xt2_d04195cb-3a00-4785-860d-8bb9537f42b7/manager/1.log" Jan 06 14:55:31 crc kubenswrapper[4869]: I0106 14:55:31.277850 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6d99759cf-t68w7_b295076d-930c-4a2b-9ba5-3cee1623e268/manager/0.log" Jan 06 14:55:31 crc kubenswrapper[4869]: I0106 14:55:31.325706 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-f99f54bc8-g6xt2_d04195cb-3a00-4785-860d-8bb9537f42b7/manager/0.log" Jan 06 14:55:31 crc kubenswrapper[4869]: I0106 14:55:31.399432 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7c8fb65dbf-55rl9_4e8628c6-a97f-48ea-a91a-1ea5257c5e49/manager/1.log" Jan 06 14:55:31 crc kubenswrapper[4869]: I0106 14:55:31.530286 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-598945d5b8-t4dkz_ea758643-2a27-40e6-8c7f-8b0020e0ad97/manager/1.log" Jan 06 14:55:31 crc kubenswrapper[4869]: I0106 14:55:31.570476 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7c8fb65dbf-55rl9_4e8628c6-a97f-48ea-a91a-1ea5257c5e49/manager/0.log" Jan 06 14:55:31 crc kubenswrapper[4869]: I0106 14:55:31.628636 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-598945d5b8-t4dkz_ea758643-2a27-40e6-8c7f-8b0020e0ad97/manager/0.log" Jan 06 14:55:31 crc kubenswrapper[4869]: I0106 14:55:31.774906 4869 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-pm7np_c634faec-64fc-4d2c-af70-94f85b6fcd59/manager/0.log" Jan 06 14:55:31 crc kubenswrapper[4869]: I0106 14:55:31.779213 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b88bfc995-pm7np_c634faec-64fc-4d2c-af70-94f85b6fcd59/manager/1.log" Jan 06 14:55:32 crc kubenswrapper[4869]: I0106 14:55:32.006859 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-7lbqn_995201cd-f7dd-40a5-8854-192f32239e25/manager/0.log" Jan 06 14:55:32 crc kubenswrapper[4869]: I0106 14:55:32.035544 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7cd87b778f-7lbqn_995201cd-f7dd-40a5-8854-192f32239e25/manager/1.log" Jan 06 14:55:32 crc kubenswrapper[4869]: I0106 14:55:32.054385 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5fbbf8b6cc-n78kg_c0aac0d5-701b-4a75-9bd0-4c9530692565/manager/1.log" Jan 06 14:55:32 crc kubenswrapper[4869]: I0106 14:55:32.273955 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5fbbf8b6cc-n78kg_c0aac0d5-701b-4a75-9bd0-4c9530692565/manager/0.log" Jan 06 14:55:32 crc kubenswrapper[4869]: I0106 14:55:32.291882 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-r4ck9_e39be0e5-0e29-45cd-925b-6eafb2b385a9/manager/1.log" Jan 06 14:55:32 crc kubenswrapper[4869]: I0106 14:55:32.304619 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-68c649d9d-r4ck9_e39be0e5-0e29-45cd-925b-6eafb2b385a9/manager/0.log" Jan 06 14:55:32 crc kubenswrapper[4869]: I0106 14:55:32.474804 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-78948ddfd7s8247_da44c856-c228-45b1-947b-891308581bb6/manager/1.log" Jan 06 14:55:32 crc kubenswrapper[4869]: I0106 14:55:32.511160 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-78948ddfd7s8247_da44c856-c228-45b1-947b-891308581bb6/manager/0.log" Jan 06 14:55:32 crc kubenswrapper[4869]: I0106 14:55:32.680008 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7d77f59d59-zfch2_24ca9405-001a-4beb-a0fa-0f3775dab087/manager/1.log" Jan 06 14:55:32 crc kubenswrapper[4869]: I0106 14:55:32.825580 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-596cb89f89-kplkv_32b4a497-f056-4c29-890a-bb5616a79adf/operator/1.log" Jan 06 14:55:33 crc kubenswrapper[4869]: I0106 14:55:33.097902 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6lgb5_1ef55df8-93a5-440e-a53d-1c4b3eea7d0e/registry-server/0.log" Jan 06 14:55:33 crc kubenswrapper[4869]: I0106 14:55:33.160275 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-596cb89f89-kplkv_32b4a497-f056-4c29-890a-bb5616a79adf/operator/0.log" Jan 06 14:55:33 crc kubenswrapper[4869]: I0106 14:55:33.228287 4869 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-wl9w7_ee1dd5c3-5e85-416c-933a-07fb51ec12d8/manager/1.log" Jan 06 14:55:33 crc kubenswrapper[4869]: I0106 14:55:33.369719 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bf6d4f946-wl9w7_ee1dd5c3-5e85-416c-933a-07fb51ec12d8/manager/0.log" Jan 06 14:55:33 crc kubenswrapper[4869]: I0106 14:55:33.695992 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-9b6f8f78c-p249l_def35933-1964-4328-a9b2-dc9f72d11bcf/manager/1.log" Jan 06 14:55:33 crc kubenswrapper[4869]: I0106 14:55:33.796304 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-9b6f8f78c-p249l_def35933-1964-4328-a9b2-dc9f72d11bcf/manager/0.log" Jan 06 14:55:33 crc kubenswrapper[4869]: I0106 14:55:33.816475 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7d77f59d59-zfch2_24ca9405-001a-4beb-a0fa-0f3775dab087/manager/0.log" Jan 06 14:55:33 crc kubenswrapper[4869]: I0106 14:55:33.873518 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-jtv5n_ff01227e-d9f4-4dd0-bc22-455a00294406/operator/1.log" Jan 06 14:55:33 crc kubenswrapper[4869]: I0106 14:55:33.919554 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-jtv5n_ff01227e-d9f4-4dd0-bc22-455a00294406/operator/0.log" Jan 06 14:55:34 crc kubenswrapper[4869]: I0106 14:55:34.025404 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bb586bbf4-5ltk8_3f4a328b-302b-496b-af2b-abec609682a6/manager/0.log" Jan 06 14:55:34 crc kubenswrapper[4869]: I0106 14:55:34.031618 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bb586bbf4-5ltk8_3f4a328b-302b-496b-af2b-abec609682a6/manager/1.log" Jan 06 14:55:34 crc kubenswrapper[4869]: I0106 14:55:34.134840 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-68d988df55-d2jnv_5427b0d1-29a3-47c0-9a1a-a945063ae129/manager/1.log" Jan 06 14:55:34 crc kubenswrapper[4869]: I0106 14:55:34.266118 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-68d988df55-d2jnv_5427b0d1-29a3-47c0-9a1a-a945063ae129/manager/0.log" Jan 06 14:55:34 crc kubenswrapper[4869]: I0106 14:55:34.296998 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-45sp6_7a343a23-f2df-474c-842c-f999f7d0e9b4/manager/1.log" Jan 06 14:55:34 crc kubenswrapper[4869]: I0106 14:55:34.344090 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6c866cfdcb-45sp6_7a343a23-f2df-474c-842c-f999f7d0e9b4/manager/0.log" Jan 06 14:55:34 crc kubenswrapper[4869]: I0106 14:55:34.468642 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-9dbdf6486-csthh_2ad69939-a56e-4589-bf4b-68fb8d42d7eb/manager/1.log" Jan 06 14:55:34 crc kubenswrapper[4869]: I0106 14:55:34.493405 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-9dbdf6486-csthh_2ad69939-a56e-4589-bf4b-68fb8d42d7eb/manager/0.log" Jan 06 14:55:53 crc kubenswrapper[4869]: I0106 14:55:53.645904 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-f52zz_bfeea9a2-3239-4a04-a07e-7c0e0dd28bd2/control-plane-machine-set-operator/0.log" Jan 06 14:55:53 crc kubenswrapper[4869]: I0106 14:55:53.784702 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8t96r_7462c7be-1f9d-4f4b-a844-71a3518a27e2/kube-rbac-proxy/0.log" Jan 06 14:55:53 crc kubenswrapper[4869]: I0106 14:55:53.906710 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-8t96r_7462c7be-1f9d-4f4b-a844-71a3518a27e2/machine-api-operator/0.log" Jan 06 14:56:03 crc kubenswrapper[4869]: I0106 14:56:03.622968 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:56:03 crc kubenswrapper[4869]: I0106 14:56:03.623417 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:56:06 crc kubenswrapper[4869]: I0106 14:56:06.723692 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-mpc28_1d08b8f3-af4f-4dff-876f-53fe177523f0/cert-manager-controller/0.log" Jan 06 14:56:06 crc kubenswrapper[4869]: I0106 14:56:06.899064 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-622jv_34990624-9069-46d0-b8b5-03a8b37ef9ae/cert-manager-cainjector/0.log" Jan 06 14:56:06 crc kubenswrapper[4869]: I0106 14:56:06.987541 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-ht2rp_d899fe45-a78f-4d3b-af09-fc0eb97afd9a/cert-manager-webhook/0.log" Jan 06 14:56:20 crc kubenswrapper[4869]: I0106 14:56:20.023183 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-4g7qv_9cdcd0c2-0cc9-4382-bc15-4ea0fee95cf3/nmstate-console-plugin/0.log" Jan 06 14:56:20 crc kubenswrapper[4869]: I0106 14:56:20.208422 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-8jk6n_06d392f5-fbf0-4b8c-9e2b-1e99b64ff8b0/nmstate-handler/0.log" Jan 06 14:56:20 crc kubenswrapper[4869]: I0106 14:56:20.227198 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-6fgd7_c0737021-333c-4fb1-a387-75dcff62515c/kube-rbac-proxy/0.log" Jan 06 14:56:20 crc kubenswrapper[4869]: I0106 14:56:20.394285 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-6fgd7_c0737021-333c-4fb1-a387-75dcff62515c/nmstate-metrics/0.log" Jan 06 14:56:20 crc kubenswrapper[4869]: I0106 14:56:20.425943 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-4tzkd_372ffc0b-adac-4b19-82d9-c697aeebdc15/nmstate-operator/0.log" Jan 06 14:56:20 crc kubenswrapper[4869]: I0106 14:56:20.596167 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-jkxsj_483878fb-3dc5-49d2-8765-5f3cb8cbf8f2/nmstate-webhook/0.log" Jan 06 14:56:33 crc kubenswrapper[4869]: I0106 14:56:33.622083 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:56:33 crc kubenswrapper[4869]: I0106 14:56:33.622713 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:56:34 crc kubenswrapper[4869]: I0106 14:56:34.724838 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-2jc5d_ce1c6386-c701-4846-8a6c-e04c4057862e/kube-rbac-proxy/0.log" Jan 06 14:56:35 crc kubenswrapper[4869]: I0106 14:56:35.006351 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-2jc5d_ce1c6386-c701-4846-8a6c-e04c4057862e/controller/0.log" Jan 06 14:56:35 crc kubenswrapper[4869]: I0106 14:56:35.097461 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svd9m_a203c019-10b7-4654-9c6f-7e8f535f4a31/cp-frr-files/0.log" Jan 06 14:56:35 crc kubenswrapper[4869]: I0106 14:56:35.251777 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svd9m_a203c019-10b7-4654-9c6f-7e8f535f4a31/cp-reloader/0.log" Jan 06 14:56:35 crc kubenswrapper[4869]: I0106 14:56:35.277538 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svd9m_a203c019-10b7-4654-9c6f-7e8f535f4a31/cp-frr-files/0.log" Jan 06 14:56:35 crc kubenswrapper[4869]: I0106 14:56:35.294453 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svd9m_a203c019-10b7-4654-9c6f-7e8f535f4a31/cp-metrics/0.log" Jan 06 14:56:35 crc kubenswrapper[4869]: I0106 14:56:35.296892 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svd9m_a203c019-10b7-4654-9c6f-7e8f535f4a31/cp-reloader/0.log" Jan 06 14:56:35 crc kubenswrapper[4869]: I0106 14:56:35.475383 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svd9m_a203c019-10b7-4654-9c6f-7e8f535f4a31/cp-reloader/0.log" Jan 06 14:56:35 crc kubenswrapper[4869]: I0106 14:56:35.475990 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svd9m_a203c019-10b7-4654-9c6f-7e8f535f4a31/cp-frr-files/0.log" Jan 06 14:56:35 crc kubenswrapper[4869]: I0106 14:56:35.488857 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svd9m_a203c019-10b7-4654-9c6f-7e8f535f4a31/cp-metrics/0.log" Jan 06 14:56:35 crc kubenswrapper[4869]: I0106 14:56:35.512770 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svd9m_a203c019-10b7-4654-9c6f-7e8f535f4a31/cp-metrics/0.log" Jan 06 14:56:35 crc kubenswrapper[4869]: I0106 14:56:35.757441 4869 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svd9m_a203c019-10b7-4654-9c6f-7e8f535f4a31/cp-reloader/0.log" Jan 06 14:56:35 crc kubenswrapper[4869]: I0106 14:56:35.764564 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svd9m_a203c019-10b7-4654-9c6f-7e8f535f4a31/cp-frr-files/0.log" Jan 06 14:56:35 crc kubenswrapper[4869]: I0106 14:56:35.767361 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svd9m_a203c019-10b7-4654-9c6f-7e8f535f4a31/controller/0.log" Jan 06 14:56:35 crc kubenswrapper[4869]: I0106 14:56:35.771140 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svd9m_a203c019-10b7-4654-9c6f-7e8f535f4a31/cp-metrics/0.log" Jan 06 14:56:35 crc kubenswrapper[4869]: I0106 14:56:35.952607 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svd9m_a203c019-10b7-4654-9c6f-7e8f535f4a31/kube-rbac-proxy/0.log" Jan 06 14:56:35 crc kubenswrapper[4869]: I0106 14:56:35.959004 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svd9m_a203c019-10b7-4654-9c6f-7e8f535f4a31/frr-metrics/0.log" Jan 06 14:56:36 crc kubenswrapper[4869]: I0106 14:56:36.002023 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svd9m_a203c019-10b7-4654-9c6f-7e8f535f4a31/kube-rbac-proxy-frr/0.log" Jan 06 14:56:36 crc kubenswrapper[4869]: I0106 14:56:36.163912 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svd9m_a203c019-10b7-4654-9c6f-7e8f535f4a31/reloader/0.log" Jan 06 14:56:36 crc kubenswrapper[4869]: I0106 14:56:36.258417 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-dzkl6_ea7e6385-475b-4452-bdd5-f83763ba1484/frr-k8s-webhook-server/0.log" Jan 06 14:56:36 crc kubenswrapper[4869]: I0106 14:56:36.621994 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6ccb949c7b-7jw65_81de01b0-a48a-4ca7-8509-9d12c5cb27da/manager/1.log" Jan 06 14:56:36 crc kubenswrapper[4869]: I0106 14:56:36.701805 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6ccb949c7b-7jw65_81de01b0-a48a-4ca7-8509-9d12c5cb27da/manager/0.log" Jan 06 14:56:36 crc kubenswrapper[4869]: I0106 14:56:36.837576 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-569fbf4bc-hnc5b_11640b38-620f-4dd4-b9b8-68c84cef4a48/webhook-server/0.log" Jan 06 14:56:36 crc kubenswrapper[4869]: I0106 14:56:36.987111 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-svd9m_a203c019-10b7-4654-9c6f-7e8f535f4a31/frr/0.log" Jan 06 14:56:37 crc kubenswrapper[4869]: I0106 14:56:37.044996 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-v6b2h_10efadba-cbe5-447f-8a14-768c3dbabe59/kube-rbac-proxy/0.log" Jan 06 14:56:37 crc kubenswrapper[4869]: I0106 14:56:37.355196 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-v6b2h_10efadba-cbe5-447f-8a14-768c3dbabe59/speaker/0.log" Jan 06 14:56:49 crc kubenswrapper[4869]: I0106 14:56:49.845940 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9_aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56/util/0.log" Jan 06 14:56:50 crc 
kubenswrapper[4869]: I0106 14:56:50.059111 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9_aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56/util/0.log" Jan 06 14:56:50 crc kubenswrapper[4869]: I0106 14:56:50.071216 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9_aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56/pull/0.log" Jan 06 14:56:50 crc kubenswrapper[4869]: I0106 14:56:50.071310 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9_aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56/pull/0.log" Jan 06 14:56:50 crc kubenswrapper[4869]: I0106 14:56:50.232030 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9_aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56/util/0.log" Jan 06 14:56:50 crc kubenswrapper[4869]: I0106 14:56:50.291307 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9_aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56/extract/0.log" Jan 06 14:56:50 crc kubenswrapper[4869]: I0106 14:56:50.322005 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d42nkg9_aa4f4cc1-1a1d-4dae-89cf-7d01b303ce56/pull/0.log" Jan 06 14:56:50 crc kubenswrapper[4869]: I0106 14:56:50.546265 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh_78bf3822-20b3-46ab-bdf0-ab7b83b17327/util/0.log" Jan 06 14:56:50 crc kubenswrapper[4869]: I0106 14:56:50.569181 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh_78bf3822-20b3-46ab-bdf0-ab7b83b17327/pull/0.log" Jan 06 14:56:50 crc kubenswrapper[4869]: I0106 14:56:50.589063 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh_78bf3822-20b3-46ab-bdf0-ab7b83b17327/pull/0.log" Jan 06 14:56:50 crc kubenswrapper[4869]: I0106 14:56:50.611398 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh_78bf3822-20b3-46ab-bdf0-ab7b83b17327/util/0.log" Jan 06 14:56:50 crc kubenswrapper[4869]: I0106 14:56:50.757058 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh_78bf3822-20b3-46ab-bdf0-ab7b83b17327/util/0.log" Jan 06 14:56:50 crc kubenswrapper[4869]: I0106 14:56:50.764099 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh_78bf3822-20b3-46ab-bdf0-ab7b83b17327/pull/0.log" Jan 06 14:56:50 crc kubenswrapper[4869]: I0106 14:56:50.772008 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8fblhh_78bf3822-20b3-46ab-bdf0-ab7b83b17327/extract/0.log" Jan 06 14:56:50 crc kubenswrapper[4869]: I0106 14:56:50.895901 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-8vjm9_83b564ad-004b-445b-8814-d4be0e085891/extract-utilities/0.log" Jan 06 14:56:51 crc kubenswrapper[4869]: I0106 14:56:51.088820 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vjm9_83b564ad-004b-445b-8814-d4be0e085891/extract-utilities/0.log" Jan 06 14:56:51 crc kubenswrapper[4869]: I0106 14:56:51.107654 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vjm9_83b564ad-004b-445b-8814-d4be0e085891/extract-content/0.log" Jan 06 14:56:51 crc kubenswrapper[4869]: I0106 14:56:51.118826 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vjm9_83b564ad-004b-445b-8814-d4be0e085891/extract-content/0.log" Jan 06 14:56:51 crc kubenswrapper[4869]: I0106 14:56:51.303743 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vjm9_83b564ad-004b-445b-8814-d4be0e085891/extract-utilities/0.log" Jan 06 14:56:51 crc kubenswrapper[4869]: I0106 14:56:51.317139 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vjm9_83b564ad-004b-445b-8814-d4be0e085891/extract-content/0.log" Jan 06 14:56:51 crc kubenswrapper[4869]: I0106 14:56:51.502883 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jph6b_1f2e5a2b-84ef-4926-9008-dec653a3c947/extract-utilities/0.log" Jan 06 14:56:51 crc kubenswrapper[4869]: I0106 14:56:51.670255 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vjm9_83b564ad-004b-445b-8814-d4be0e085891/registry-server/0.log" Jan 06 14:56:51 crc kubenswrapper[4869]: I0106 14:56:51.784052 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jph6b_1f2e5a2b-84ef-4926-9008-dec653a3c947/extract-utilities/0.log" Jan 06 14:56:51 crc kubenswrapper[4869]: I0106 14:56:51.793878 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jph6b_1f2e5a2b-84ef-4926-9008-dec653a3c947/extract-content/0.log" Jan 06 14:56:51 crc kubenswrapper[4869]: I0106 14:56:51.813712 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jph6b_1f2e5a2b-84ef-4926-9008-dec653a3c947/extract-content/0.log" Jan 06 14:56:51 crc kubenswrapper[4869]: I0106 14:56:51.947005 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jph6b_1f2e5a2b-84ef-4926-9008-dec653a3c947/extract-content/0.log" Jan 06 14:56:51 crc kubenswrapper[4869]: I0106 14:56:51.980180 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-jph6b_1f2e5a2b-84ef-4926-9008-dec653a3c947/extract-utilities/0.log" Jan 06 14:56:52 crc kubenswrapper[4869]: I0106 14:56:52.205444 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cxrrl_604f390e-7e5d-4ec9-8d4f-ce230272186c/marketplace-operator/0.log" Jan 06 14:56:52 crc kubenswrapper[4869]: I0106 14:56:52.230744 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-99k25_d025cef5-8e65-4270-afc4-838c1a166ad6/extract-utilities/0.log" Jan 06 14:56:52 crc kubenswrapper[4869]: I0106 14:56:52.404097 4869 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-jph6b_1f2e5a2b-84ef-4926-9008-dec653a3c947/registry-server/0.log" Jan 06 14:56:52 crc kubenswrapper[4869]: I0106 14:56:52.493208 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-99k25_d025cef5-8e65-4270-afc4-838c1a166ad6/extract-content/0.log" Jan 06 14:56:52 crc kubenswrapper[4869]: I0106 14:56:52.521293 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-99k25_d025cef5-8e65-4270-afc4-838c1a166ad6/extract-content/0.log" Jan 06 14:56:52 crc kubenswrapper[4869]: I0106 14:56:52.528870 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-99k25_d025cef5-8e65-4270-afc4-838c1a166ad6/extract-utilities/0.log" Jan 06 14:56:52 crc kubenswrapper[4869]: I0106 14:56:52.665913 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-99k25_d025cef5-8e65-4270-afc4-838c1a166ad6/extract-utilities/0.log" Jan 06 14:56:52 crc kubenswrapper[4869]: I0106 14:56:52.681907 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-99k25_d025cef5-8e65-4270-afc4-838c1a166ad6/extract-content/0.log" Jan 06 14:56:52 crc kubenswrapper[4869]: I0106 14:56:52.847231 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kb6kw_4e4dd706-de57-4440-8881-d5f18ea2506e/extract-utilities/0.log" Jan 06 14:56:52 crc kubenswrapper[4869]: I0106 14:56:52.847422 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-99k25_d025cef5-8e65-4270-afc4-838c1a166ad6/registry-server/0.log" Jan 06 14:56:53 crc kubenswrapper[4869]: I0106 14:56:53.035077 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kb6kw_4e4dd706-de57-4440-8881-d5f18ea2506e/extract-content/0.log" Jan 06 14:56:53 crc kubenswrapper[4869]: I0106 14:56:53.054139 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kb6kw_4e4dd706-de57-4440-8881-d5f18ea2506e/extract-content/0.log" Jan 06 14:56:53 crc kubenswrapper[4869]: I0106 14:56:53.078132 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kb6kw_4e4dd706-de57-4440-8881-d5f18ea2506e/extract-utilities/0.log" Jan 06 14:56:53 crc kubenswrapper[4869]: I0106 14:56:53.242749 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kb6kw_4e4dd706-de57-4440-8881-d5f18ea2506e/extract-content/0.log" Jan 06 14:56:53 crc kubenswrapper[4869]: I0106 14:56:53.252784 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kb6kw_4e4dd706-de57-4440-8881-d5f18ea2506e/extract-utilities/0.log" Jan 06 14:56:53 crc kubenswrapper[4869]: I0106 14:56:53.644348 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kb6kw_4e4dd706-de57-4440-8881-d5f18ea2506e/registry-server/0.log" Jan 06 14:57:03 crc kubenswrapper[4869]: I0106 14:57:03.623038 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:57:03 crc 
kubenswrapper[4869]: I0106 14:57:03.623517 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:57:03 crc kubenswrapper[4869]: I0106 14:57:03.623565 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 14:57:03 crc kubenswrapper[4869]: I0106 14:57:03.624278 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b2931da1dc48569fc1d9b1b3f1e0812f52d961821796eb9a8b76abca6a174489"} pod="openshift-machine-config-operator/machine-config-daemon-kt9df" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 06 14:57:03 crc kubenswrapper[4869]: I0106 14:57:03.624329 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" containerID="cri-o://b2931da1dc48569fc1d9b1b3f1e0812f52d961821796eb9a8b76abca6a174489" gracePeriod=600 Jan 06 14:57:04 crc kubenswrapper[4869]: I0106 14:57:04.069811 4869 generic.go:334] "Generic (PLEG): container finished" podID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerID="b2931da1dc48569fc1d9b1b3f1e0812f52d961821796eb9a8b76abca6a174489" exitCode=0 Jan 06 14:57:04 crc kubenswrapper[4869]: I0106 14:57:04.069881 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerDied","Data":"b2931da1dc48569fc1d9b1b3f1e0812f52d961821796eb9a8b76abca6a174489"} Jan 06 14:57:04 crc kubenswrapper[4869]: I0106 14:57:04.070390 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerStarted","Data":"8d1323dd7d8e0827a76a324801994e132e4d7120f0b885c1a5f5127ee412227f"} Jan 06 14:57:04 crc kubenswrapper[4869]: I0106 14:57:04.070419 4869 scope.go:117] "RemoveContainer" containerID="c4a9767e577ed8fd09578b7968be3e7a61dab0dfa8bf82f11c029989860bcb8d" Jan 06 14:57:15 crc kubenswrapper[4869]: E0106 14:57:15.078563 4869 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.230:51942->38.102.83.230:46819: write tcp 38.102.83.230:51942->38.102.83.230:46819: write: broken pipe Jan 06 14:58:30 crc kubenswrapper[4869]: I0106 14:58:30.053758 4869 generic.go:334] "Generic (PLEG): container finished" podID="b297df44-0f15-44cc-a32c-d64f3aa57eb9" containerID="d7c543cad3109cae0267611a534a248e5ddd087912086db552dded8ad9f319a4" exitCode=0 Jan 06 14:58:30 crc kubenswrapper[4869]: I0106 14:58:30.053839 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-d4ljj/must-gather-l5pk9" event={"ID":"b297df44-0f15-44cc-a32c-d64f3aa57eb9","Type":"ContainerDied","Data":"d7c543cad3109cae0267611a534a248e5ddd087912086db552dded8ad9f319a4"} Jan 06 14:58:30 crc kubenswrapper[4869]: I0106 14:58:30.056242 4869 scope.go:117] "RemoveContainer" containerID="d7c543cad3109cae0267611a534a248e5ddd087912086db552dded8ad9f319a4" Jan 06 14:58:30 crc 
kubenswrapper[4869]: I0106 14:58:30.321445 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-d4ljj_must-gather-l5pk9_b297df44-0f15-44cc-a32c-d64f3aa57eb9/gather/0.log" Jan 06 14:58:37 crc kubenswrapper[4869]: I0106 14:58:37.636759 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-d4ljj/must-gather-l5pk9"] Jan 06 14:58:37 crc kubenswrapper[4869]: I0106 14:58:37.637753 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-d4ljj/must-gather-l5pk9" podUID="b297df44-0f15-44cc-a32c-d64f3aa57eb9" containerName="copy" containerID="cri-o://74129d0d9a2c65addc5ac2f2964f232ffcc65575736124b856ef456ffe9671dd" gracePeriod=2 Jan 06 14:58:37 crc kubenswrapper[4869]: I0106 14:58:37.646414 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-d4ljj/must-gather-l5pk9"] Jan 06 14:58:38 crc kubenswrapper[4869]: I0106 14:58:38.070569 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-d4ljj_must-gather-l5pk9_b297df44-0f15-44cc-a32c-d64f3aa57eb9/copy/0.log" Jan 06 14:58:38 crc kubenswrapper[4869]: I0106 14:58:38.071432 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d4ljj/must-gather-l5pk9" Jan 06 14:58:38 crc kubenswrapper[4869]: I0106 14:58:38.132880 4869 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-d4ljj_must-gather-l5pk9_b297df44-0f15-44cc-a32c-d64f3aa57eb9/copy/0.log" Jan 06 14:58:38 crc kubenswrapper[4869]: I0106 14:58:38.133227 4869 generic.go:334] "Generic (PLEG): container finished" podID="b297df44-0f15-44cc-a32c-d64f3aa57eb9" containerID="74129d0d9a2c65addc5ac2f2964f232ffcc65575736124b856ef456ffe9671dd" exitCode=143 Jan 06 14:58:38 crc kubenswrapper[4869]: I0106 14:58:38.133275 4869 scope.go:117] "RemoveContainer" containerID="74129d0d9a2c65addc5ac2f2964f232ffcc65575736124b856ef456ffe9671dd" Jan 06 14:58:38 crc kubenswrapper[4869]: I0106 14:58:38.133378 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-d4ljj/must-gather-l5pk9" Jan 06 14:58:38 crc kubenswrapper[4869]: I0106 14:58:38.153808 4869 scope.go:117] "RemoveContainer" containerID="d7c543cad3109cae0267611a534a248e5ddd087912086db552dded8ad9f319a4" Jan 06 14:58:38 crc kubenswrapper[4869]: I0106 14:58:38.165533 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b297df44-0f15-44cc-a32c-d64f3aa57eb9-must-gather-output\") pod \"b297df44-0f15-44cc-a32c-d64f3aa57eb9\" (UID: \"b297df44-0f15-44cc-a32c-d64f3aa57eb9\") " Jan 06 14:58:38 crc kubenswrapper[4869]: I0106 14:58:38.165599 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx2cn\" (UniqueName: \"kubernetes.io/projected/b297df44-0f15-44cc-a32c-d64f3aa57eb9-kube-api-access-qx2cn\") pod \"b297df44-0f15-44cc-a32c-d64f3aa57eb9\" (UID: \"b297df44-0f15-44cc-a32c-d64f3aa57eb9\") " Jan 06 14:58:38 crc kubenswrapper[4869]: I0106 14:58:38.172593 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b297df44-0f15-44cc-a32c-d64f3aa57eb9-kube-api-access-qx2cn" (OuterVolumeSpecName: "kube-api-access-qx2cn") pod "b297df44-0f15-44cc-a32c-d64f3aa57eb9" (UID: "b297df44-0f15-44cc-a32c-d64f3aa57eb9"). InnerVolumeSpecName "kube-api-access-qx2cn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:58:38 crc kubenswrapper[4869]: I0106 14:58:38.230459 4869 scope.go:117] "RemoveContainer" containerID="74129d0d9a2c65addc5ac2f2964f232ffcc65575736124b856ef456ffe9671dd" Jan 06 14:58:38 crc kubenswrapper[4869]: E0106 14:58:38.230995 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74129d0d9a2c65addc5ac2f2964f232ffcc65575736124b856ef456ffe9671dd\": container with ID starting with 74129d0d9a2c65addc5ac2f2964f232ffcc65575736124b856ef456ffe9671dd not found: ID does not exist" containerID="74129d0d9a2c65addc5ac2f2964f232ffcc65575736124b856ef456ffe9671dd" Jan 06 14:58:38 crc kubenswrapper[4869]: I0106 14:58:38.231036 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74129d0d9a2c65addc5ac2f2964f232ffcc65575736124b856ef456ffe9671dd"} err="failed to get container status \"74129d0d9a2c65addc5ac2f2964f232ffcc65575736124b856ef456ffe9671dd\": rpc error: code = NotFound desc = could not find container \"74129d0d9a2c65addc5ac2f2964f232ffcc65575736124b856ef456ffe9671dd\": container with ID starting with 74129d0d9a2c65addc5ac2f2964f232ffcc65575736124b856ef456ffe9671dd not found: ID does not exist" Jan 06 14:58:38 crc kubenswrapper[4869]: I0106 14:58:38.231059 4869 scope.go:117] "RemoveContainer" containerID="d7c543cad3109cae0267611a534a248e5ddd087912086db552dded8ad9f319a4" Jan 06 14:58:38 crc kubenswrapper[4869]: E0106 14:58:38.231411 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7c543cad3109cae0267611a534a248e5ddd087912086db552dded8ad9f319a4\": container with ID starting with d7c543cad3109cae0267611a534a248e5ddd087912086db552dded8ad9f319a4 not found: ID does not exist" containerID="d7c543cad3109cae0267611a534a248e5ddd087912086db552dded8ad9f319a4" Jan 06 14:58:38 crc kubenswrapper[4869]: I0106 14:58:38.231441 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c543cad3109cae0267611a534a248e5ddd087912086db552dded8ad9f319a4"} err="failed to get container status \"d7c543cad3109cae0267611a534a248e5ddd087912086db552dded8ad9f319a4\": rpc error: code = NotFound desc = could not find container \"d7c543cad3109cae0267611a534a248e5ddd087912086db552dded8ad9f319a4\": container with ID starting with d7c543cad3109cae0267611a534a248e5ddd087912086db552dded8ad9f319a4 not found: ID does not exist" Jan 06 14:58:38 crc kubenswrapper[4869]: I0106 14:58:38.268383 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx2cn\" (UniqueName: \"kubernetes.io/projected/b297df44-0f15-44cc-a32c-d64f3aa57eb9-kube-api-access-qx2cn\") on node \"crc\" DevicePath \"\"" Jan 06 14:58:38 crc kubenswrapper[4869]: I0106 14:58:38.315363 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b297df44-0f15-44cc-a32c-d64f3aa57eb9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b297df44-0f15-44cc-a32c-d64f3aa57eb9" (UID: "b297df44-0f15-44cc-a32c-d64f3aa57eb9"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:58:38 crc kubenswrapper[4869]: I0106 14:58:38.370723 4869 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b297df44-0f15-44cc-a32c-d64f3aa57eb9-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 06 14:58:39 crc kubenswrapper[4869]: I0106 14:58:39.718160 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b297df44-0f15-44cc-a32c-d64f3aa57eb9" path="/var/lib/kubelet/pods/b297df44-0f15-44cc-a32c-d64f3aa57eb9/volumes" Jan 06 14:59:03 crc kubenswrapper[4869]: I0106 14:59:03.622077 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:59:03 crc kubenswrapper[4869]: I0106 14:59:03.623513 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:59:08 crc kubenswrapper[4869]: I0106 14:59:08.802686 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t2wrs"] Jan 06 14:59:08 crc kubenswrapper[4869]: E0106 14:59:08.803684 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b297df44-0f15-44cc-a32c-d64f3aa57eb9" containerName="gather" Jan 06 14:59:08 crc kubenswrapper[4869]: I0106 14:59:08.803702 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b297df44-0f15-44cc-a32c-d64f3aa57eb9" containerName="gather" Jan 06 14:59:08 crc kubenswrapper[4869]: E0106 14:59:08.803719 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b297df44-0f15-44cc-a32c-d64f3aa57eb9" containerName="copy" Jan 06 14:59:08 crc kubenswrapper[4869]: I0106 14:59:08.803725 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="b297df44-0f15-44cc-a32c-d64f3aa57eb9" containerName="copy" Jan 06 14:59:08 crc kubenswrapper[4869]: E0106 14:59:08.803744 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18" containerName="container-00" Jan 06 14:59:08 crc kubenswrapper[4869]: I0106 14:59:08.803753 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18" containerName="container-00" Jan 06 14:59:08 crc kubenswrapper[4869]: I0106 14:59:08.803957 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b297df44-0f15-44cc-a32c-d64f3aa57eb9" containerName="gather" Jan 06 14:59:08 crc kubenswrapper[4869]: I0106 14:59:08.803988 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe1fd9bd-b480-4e25-8c08-b13c9e0c0a18" containerName="container-00" Jan 06 14:59:08 crc kubenswrapper[4869]: I0106 14:59:08.804004 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="b297df44-0f15-44cc-a32c-d64f3aa57eb9" containerName="copy" Jan 06 14:59:08 crc kubenswrapper[4869]: I0106 14:59:08.805476 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t2wrs" Jan 06 14:59:08 crc kubenswrapper[4869]: I0106 14:59:08.814344 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t2wrs"] Jan 06 14:59:08 crc kubenswrapper[4869]: I0106 14:59:08.944834 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfwwj\" (UniqueName: \"kubernetes.io/projected/a06b5f8a-463a-4106-ad9f-c0834fb7331b-kube-api-access-gfwwj\") pod \"certified-operators-t2wrs\" (UID: \"a06b5f8a-463a-4106-ad9f-c0834fb7331b\") " pod="openshift-marketplace/certified-operators-t2wrs" Jan 06 14:59:08 crc kubenswrapper[4869]: I0106 14:59:08.945473 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a06b5f8a-463a-4106-ad9f-c0834fb7331b-catalog-content\") pod \"certified-operators-t2wrs\" (UID: \"a06b5f8a-463a-4106-ad9f-c0834fb7331b\") " pod="openshift-marketplace/certified-operators-t2wrs" Jan 06 14:59:08 crc kubenswrapper[4869]: I0106 14:59:08.945558 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a06b5f8a-463a-4106-ad9f-c0834fb7331b-utilities\") pod \"certified-operators-t2wrs\" (UID: \"a06b5f8a-463a-4106-ad9f-c0834fb7331b\") " pod="openshift-marketplace/certified-operators-t2wrs" Jan 06 14:59:09 crc kubenswrapper[4869]: I0106 14:59:09.048024 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfwwj\" (UniqueName: \"kubernetes.io/projected/a06b5f8a-463a-4106-ad9f-c0834fb7331b-kube-api-access-gfwwj\") pod \"certified-operators-t2wrs\" (UID: \"a06b5f8a-463a-4106-ad9f-c0834fb7331b\") " pod="openshift-marketplace/certified-operators-t2wrs" Jan 06 14:59:09 crc kubenswrapper[4869]: I0106 14:59:09.048117 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a06b5f8a-463a-4106-ad9f-c0834fb7331b-catalog-content\") pod \"certified-operators-t2wrs\" (UID: \"a06b5f8a-463a-4106-ad9f-c0834fb7331b\") " pod="openshift-marketplace/certified-operators-t2wrs" Jan 06 14:59:09 crc kubenswrapper[4869]: I0106 14:59:09.048139 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a06b5f8a-463a-4106-ad9f-c0834fb7331b-utilities\") pod \"certified-operators-t2wrs\" (UID: \"a06b5f8a-463a-4106-ad9f-c0834fb7331b\") " pod="openshift-marketplace/certified-operators-t2wrs" Jan 06 14:59:09 crc kubenswrapper[4869]: I0106 14:59:09.048738 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a06b5f8a-463a-4106-ad9f-c0834fb7331b-catalog-content\") pod \"certified-operators-t2wrs\" (UID: \"a06b5f8a-463a-4106-ad9f-c0834fb7331b\") " pod="openshift-marketplace/certified-operators-t2wrs" Jan 06 14:59:09 crc kubenswrapper[4869]: I0106 14:59:09.048787 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a06b5f8a-463a-4106-ad9f-c0834fb7331b-utilities\") pod \"certified-operators-t2wrs\" (UID: \"a06b5f8a-463a-4106-ad9f-c0834fb7331b\") " pod="openshift-marketplace/certified-operators-t2wrs" Jan 06 14:59:09 crc kubenswrapper[4869]: I0106 14:59:09.077544 4869 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gfwwj\" (UniqueName: \"kubernetes.io/projected/a06b5f8a-463a-4106-ad9f-c0834fb7331b-kube-api-access-gfwwj\") pod \"certified-operators-t2wrs\" (UID: \"a06b5f8a-463a-4106-ad9f-c0834fb7331b\") " pod="openshift-marketplace/certified-operators-t2wrs" Jan 06 14:59:09 crc kubenswrapper[4869]: I0106 14:59:09.138695 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t2wrs" Jan 06 14:59:09 crc kubenswrapper[4869]: I0106 14:59:09.462897 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t2wrs"] Jan 06 14:59:09 crc kubenswrapper[4869]: I0106 14:59:09.513836 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2wrs" event={"ID":"a06b5f8a-463a-4106-ad9f-c0834fb7331b","Type":"ContainerStarted","Data":"7428d2f0f661f60c97fe250054b80df5ae019beccf9eba21d4e12fe1b6efa266"} Jan 06 14:59:10 crc kubenswrapper[4869]: I0106 14:59:10.522641 4869 generic.go:334] "Generic (PLEG): container finished" podID="a06b5f8a-463a-4106-ad9f-c0834fb7331b" containerID="b36731a678493638609be210f1f6ae554cae8f4d8b1fe9459ab86463aef8d418" exitCode=0 Jan 06 14:59:10 crc kubenswrapper[4869]: I0106 14:59:10.522721 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2wrs" event={"ID":"a06b5f8a-463a-4106-ad9f-c0834fb7331b","Type":"ContainerDied","Data":"b36731a678493638609be210f1f6ae554cae8f4d8b1fe9459ab86463aef8d418"} Jan 06 14:59:10 crc kubenswrapper[4869]: I0106 14:59:10.525069 4869 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 06 14:59:11 crc kubenswrapper[4869]: I0106 14:59:11.539132 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2wrs" event={"ID":"a06b5f8a-463a-4106-ad9f-c0834fb7331b","Type":"ContainerStarted","Data":"0c3c422513f25195f913f000743167fb71c4f15f7eb9c3cfafaf95746140ad37"} Jan 06 14:59:12 crc kubenswrapper[4869]: I0106 14:59:12.556105 4869 generic.go:334] "Generic (PLEG): container finished" podID="a06b5f8a-463a-4106-ad9f-c0834fb7331b" containerID="0c3c422513f25195f913f000743167fb71c4f15f7eb9c3cfafaf95746140ad37" exitCode=0 Jan 06 14:59:12 crc kubenswrapper[4869]: I0106 14:59:12.556387 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2wrs" event={"ID":"a06b5f8a-463a-4106-ad9f-c0834fb7331b","Type":"ContainerDied","Data":"0c3c422513f25195f913f000743167fb71c4f15f7eb9c3cfafaf95746140ad37"} Jan 06 14:59:13 crc kubenswrapper[4869]: I0106 14:59:13.567169 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2wrs" event={"ID":"a06b5f8a-463a-4106-ad9f-c0834fb7331b","Type":"ContainerStarted","Data":"46d07f6fe77ef85a8a33080fac824fbd01f42efbf3d2d5440a96899b1dd55a70"} Jan 06 14:59:13 crc kubenswrapper[4869]: I0106 14:59:13.604204 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t2wrs" podStartSLOduration=3.133058845 podStartE2EDuration="5.604169063s" podCreationTimestamp="2026-01-06 14:59:08 +0000 UTC" firstStartedPulling="2026-01-06 14:59:10.524768753 +0000 UTC m=+3569.064456417" lastFinishedPulling="2026-01-06 14:59:12.995878971 +0000 UTC m=+3571.535566635" observedRunningTime="2026-01-06 14:59:13.599364336 +0000 UTC m=+3572.139052010" watchObservedRunningTime="2026-01-06 
14:59:13.604169063 +0000 UTC m=+3572.143856737" Jan 06 14:59:16 crc kubenswrapper[4869]: I0106 14:59:16.177915 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rwzkk"] Jan 06 14:59:16 crc kubenswrapper[4869]: I0106 14:59:16.181990 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rwzkk" Jan 06 14:59:16 crc kubenswrapper[4869]: I0106 14:59:16.195193 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rwzkk"] Jan 06 14:59:16 crc kubenswrapper[4869]: I0106 14:59:16.292243 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cdce579-219a-46fa-8fef-b3d47ed3005c-utilities\") pod \"community-operators-rwzkk\" (UID: \"1cdce579-219a-46fa-8fef-b3d47ed3005c\") " pod="openshift-marketplace/community-operators-rwzkk" Jan 06 14:59:16 crc kubenswrapper[4869]: I0106 14:59:16.292332 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txdms\" (UniqueName: \"kubernetes.io/projected/1cdce579-219a-46fa-8fef-b3d47ed3005c-kube-api-access-txdms\") pod \"community-operators-rwzkk\" (UID: \"1cdce579-219a-46fa-8fef-b3d47ed3005c\") " pod="openshift-marketplace/community-operators-rwzkk" Jan 06 14:59:16 crc kubenswrapper[4869]: I0106 14:59:16.292382 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cdce579-219a-46fa-8fef-b3d47ed3005c-catalog-content\") pod \"community-operators-rwzkk\" (UID: \"1cdce579-219a-46fa-8fef-b3d47ed3005c\") " pod="openshift-marketplace/community-operators-rwzkk" Jan 06 14:59:16 crc kubenswrapper[4869]: I0106 14:59:16.394439 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cdce579-219a-46fa-8fef-b3d47ed3005c-utilities\") pod \"community-operators-rwzkk\" (UID: \"1cdce579-219a-46fa-8fef-b3d47ed3005c\") " pod="openshift-marketplace/community-operators-rwzkk" Jan 06 14:59:16 crc kubenswrapper[4869]: I0106 14:59:16.394519 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txdms\" (UniqueName: \"kubernetes.io/projected/1cdce579-219a-46fa-8fef-b3d47ed3005c-kube-api-access-txdms\") pod \"community-operators-rwzkk\" (UID: \"1cdce579-219a-46fa-8fef-b3d47ed3005c\") " pod="openshift-marketplace/community-operators-rwzkk" Jan 06 14:59:16 crc kubenswrapper[4869]: I0106 14:59:16.394545 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cdce579-219a-46fa-8fef-b3d47ed3005c-catalog-content\") pod \"community-operators-rwzkk\" (UID: \"1cdce579-219a-46fa-8fef-b3d47ed3005c\") " pod="openshift-marketplace/community-operators-rwzkk" Jan 06 14:59:16 crc kubenswrapper[4869]: I0106 14:59:16.395124 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cdce579-219a-46fa-8fef-b3d47ed3005c-utilities\") pod \"community-operators-rwzkk\" (UID: \"1cdce579-219a-46fa-8fef-b3d47ed3005c\") " pod="openshift-marketplace/community-operators-rwzkk" Jan 06 14:59:16 crc kubenswrapper[4869]: I0106 14:59:16.395146 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cdce579-219a-46fa-8fef-b3d47ed3005c-catalog-content\") pod \"community-operators-rwzkk\" (UID: \"1cdce579-219a-46fa-8fef-b3d47ed3005c\") " pod="openshift-marketplace/community-operators-rwzkk" Jan 06 14:59:16 crc kubenswrapper[4869]: I0106 14:59:16.413625 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txdms\" (UniqueName: \"kubernetes.io/projected/1cdce579-219a-46fa-8fef-b3d47ed3005c-kube-api-access-txdms\") pod \"community-operators-rwzkk\" (UID: \"1cdce579-219a-46fa-8fef-b3d47ed3005c\") " pod="openshift-marketplace/community-operators-rwzkk" Jan 06 14:59:16 crc kubenswrapper[4869]: I0106 14:59:16.500464 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rwzkk" Jan 06 14:59:17 crc kubenswrapper[4869]: I0106 14:59:17.022871 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rwzkk"] Jan 06 14:59:17 crc kubenswrapper[4869]: W0106 14:59:17.029474 4869 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cdce579_219a_46fa_8fef_b3d47ed3005c.slice/crio-76c43173e522d5e6d4b69b09477cee040ae7b00fa3266ab2250d7358f06b3c24 WatchSource:0}: Error finding container 76c43173e522d5e6d4b69b09477cee040ae7b00fa3266ab2250d7358f06b3c24: Status 404 returned error can't find the container with id 76c43173e522d5e6d4b69b09477cee040ae7b00fa3266ab2250d7358f06b3c24 Jan 06 14:59:17 crc kubenswrapper[4869]: I0106 14:59:17.608836 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwzkk" event={"ID":"1cdce579-219a-46fa-8fef-b3d47ed3005c","Type":"ContainerStarted","Data":"76c43173e522d5e6d4b69b09477cee040ae7b00fa3266ab2250d7358f06b3c24"} Jan 06 14:59:18 crc kubenswrapper[4869]: I0106 14:59:18.617730 4869 generic.go:334] "Generic (PLEG): container finished" podID="1cdce579-219a-46fa-8fef-b3d47ed3005c" containerID="0a0a52692e8f9c82777c5c2d603766ec0b24d2d2de4e85812b4640993e14406f" exitCode=0 Jan 06 14:59:18 crc kubenswrapper[4869]: I0106 14:59:18.617801 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwzkk" event={"ID":"1cdce579-219a-46fa-8fef-b3d47ed3005c","Type":"ContainerDied","Data":"0a0a52692e8f9c82777c5c2d603766ec0b24d2d2de4e85812b4640993e14406f"} Jan 06 14:59:19 crc kubenswrapper[4869]: I0106 14:59:19.139411 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t2wrs" Jan 06 14:59:19 crc kubenswrapper[4869]: I0106 14:59:19.139588 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t2wrs" Jan 06 14:59:19 crc kubenswrapper[4869]: I0106 14:59:19.190968 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t2wrs" Jan 06 14:59:19 crc kubenswrapper[4869]: I0106 14:59:19.627068 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwzkk" event={"ID":"1cdce579-219a-46fa-8fef-b3d47ed3005c","Type":"ContainerStarted","Data":"dbc95138f75e77d0c6cff664543ba26b605f0800800698d0a6c1f4d29058b0c0"} Jan 06 14:59:19 crc kubenswrapper[4869]: I0106 14:59:19.674260 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t2wrs" Jan 06 
14:59:20 crc kubenswrapper[4869]: I0106 14:59:20.637107 4869 generic.go:334] "Generic (PLEG): container finished" podID="1cdce579-219a-46fa-8fef-b3d47ed3005c" containerID="dbc95138f75e77d0c6cff664543ba26b605f0800800698d0a6c1f4d29058b0c0" exitCode=0 Jan 06 14:59:20 crc kubenswrapper[4869]: I0106 14:59:20.637182 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwzkk" event={"ID":"1cdce579-219a-46fa-8fef-b3d47ed3005c","Type":"ContainerDied","Data":"dbc95138f75e77d0c6cff664543ba26b605f0800800698d0a6c1f4d29058b0c0"} Jan 06 14:59:20 crc kubenswrapper[4869]: I0106 14:59:20.977765 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t2wrs"] Jan 06 14:59:21 crc kubenswrapper[4869]: I0106 14:59:21.659897 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwzkk" event={"ID":"1cdce579-219a-46fa-8fef-b3d47ed3005c","Type":"ContainerStarted","Data":"eaeff106663036624acc68966e9958b70a1bd13e9a9106645cc2365ab27b2b30"} Jan 06 14:59:22 crc kubenswrapper[4869]: I0106 14:59:22.665895 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t2wrs" podUID="a06b5f8a-463a-4106-ad9f-c0834fb7331b" containerName="registry-server" containerID="cri-o://46d07f6fe77ef85a8a33080fac824fbd01f42efbf3d2d5440a96899b1dd55a70" gracePeriod=2 Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.132805 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t2wrs" Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.153400 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rwzkk" podStartSLOduration=4.496983167 podStartE2EDuration="7.153372021s" podCreationTimestamp="2026-01-06 14:59:16 +0000 UTC" firstStartedPulling="2026-01-06 14:59:18.621251927 +0000 UTC m=+3577.160939591" lastFinishedPulling="2026-01-06 14:59:21.277640781 +0000 UTC m=+3579.817328445" observedRunningTime="2026-01-06 14:59:21.68196271 +0000 UTC m=+3580.221650394" watchObservedRunningTime="2026-01-06 14:59:23.153372021 +0000 UTC m=+3581.693059685" Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.232176 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfwwj\" (UniqueName: \"kubernetes.io/projected/a06b5f8a-463a-4106-ad9f-c0834fb7331b-kube-api-access-gfwwj\") pod \"a06b5f8a-463a-4106-ad9f-c0834fb7331b\" (UID: \"a06b5f8a-463a-4106-ad9f-c0834fb7331b\") " Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.232248 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a06b5f8a-463a-4106-ad9f-c0834fb7331b-utilities\") pod \"a06b5f8a-463a-4106-ad9f-c0834fb7331b\" (UID: \"a06b5f8a-463a-4106-ad9f-c0834fb7331b\") " Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.232306 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a06b5f8a-463a-4106-ad9f-c0834fb7331b-catalog-content\") pod \"a06b5f8a-463a-4106-ad9f-c0834fb7331b\" (UID: \"a06b5f8a-463a-4106-ad9f-c0834fb7331b\") " Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.233295 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a06b5f8a-463a-4106-ad9f-c0834fb7331b-utilities" 
(OuterVolumeSpecName: "utilities") pod "a06b5f8a-463a-4106-ad9f-c0834fb7331b" (UID: "a06b5f8a-463a-4106-ad9f-c0834fb7331b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.240356 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a06b5f8a-463a-4106-ad9f-c0834fb7331b-kube-api-access-gfwwj" (OuterVolumeSpecName: "kube-api-access-gfwwj") pod "a06b5f8a-463a-4106-ad9f-c0834fb7331b" (UID: "a06b5f8a-463a-4106-ad9f-c0834fb7331b"). InnerVolumeSpecName "kube-api-access-gfwwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.282356 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a06b5f8a-463a-4106-ad9f-c0834fb7331b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a06b5f8a-463a-4106-ad9f-c0834fb7331b" (UID: "a06b5f8a-463a-4106-ad9f-c0834fb7331b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.334472 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfwwj\" (UniqueName: \"kubernetes.io/projected/a06b5f8a-463a-4106-ad9f-c0834fb7331b-kube-api-access-gfwwj\") on node \"crc\" DevicePath \"\"" Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.334517 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a06b5f8a-463a-4106-ad9f-c0834fb7331b-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.334528 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a06b5f8a-463a-4106-ad9f-c0834fb7331b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.677556 4869 generic.go:334] "Generic (PLEG): container finished" podID="a06b5f8a-463a-4106-ad9f-c0834fb7331b" containerID="46d07f6fe77ef85a8a33080fac824fbd01f42efbf3d2d5440a96899b1dd55a70" exitCode=0 Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.677593 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2wrs" event={"ID":"a06b5f8a-463a-4106-ad9f-c0834fb7331b","Type":"ContainerDied","Data":"46d07f6fe77ef85a8a33080fac824fbd01f42efbf3d2d5440a96899b1dd55a70"} Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.677639 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t2wrs" Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.677655 4869 scope.go:117] "RemoveContainer" containerID="46d07f6fe77ef85a8a33080fac824fbd01f42efbf3d2d5440a96899b1dd55a70" Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.677643 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2wrs" event={"ID":"a06b5f8a-463a-4106-ad9f-c0834fb7331b","Type":"ContainerDied","Data":"7428d2f0f661f60c97fe250054b80df5ae019beccf9eba21d4e12fe1b6efa266"} Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.717476 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t2wrs"] Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.719210 4869 scope.go:117] "RemoveContainer" containerID="0c3c422513f25195f913f000743167fb71c4f15f7eb9c3cfafaf95746140ad37" Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.722572 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t2wrs"] Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.744541 4869 scope.go:117] "RemoveContainer" containerID="b36731a678493638609be210f1f6ae554cae8f4d8b1fe9459ab86463aef8d418" Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.797291 4869 scope.go:117] "RemoveContainer" containerID="46d07f6fe77ef85a8a33080fac824fbd01f42efbf3d2d5440a96899b1dd55a70" Jan 06 14:59:23 crc kubenswrapper[4869]: E0106 14:59:23.798713 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46d07f6fe77ef85a8a33080fac824fbd01f42efbf3d2d5440a96899b1dd55a70\": container with ID starting with 46d07f6fe77ef85a8a33080fac824fbd01f42efbf3d2d5440a96899b1dd55a70 not found: ID does not exist" containerID="46d07f6fe77ef85a8a33080fac824fbd01f42efbf3d2d5440a96899b1dd55a70" Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.798766 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46d07f6fe77ef85a8a33080fac824fbd01f42efbf3d2d5440a96899b1dd55a70"} err="failed to get container status \"46d07f6fe77ef85a8a33080fac824fbd01f42efbf3d2d5440a96899b1dd55a70\": rpc error: code = NotFound desc = could not find container \"46d07f6fe77ef85a8a33080fac824fbd01f42efbf3d2d5440a96899b1dd55a70\": container with ID starting with 46d07f6fe77ef85a8a33080fac824fbd01f42efbf3d2d5440a96899b1dd55a70 not found: ID does not exist" Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.798798 4869 scope.go:117] "RemoveContainer" containerID="0c3c422513f25195f913f000743167fb71c4f15f7eb9c3cfafaf95746140ad37" Jan 06 14:59:23 crc kubenswrapper[4869]: E0106 14:59:23.799321 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c3c422513f25195f913f000743167fb71c4f15f7eb9c3cfafaf95746140ad37\": container with ID starting with 0c3c422513f25195f913f000743167fb71c4f15f7eb9c3cfafaf95746140ad37 not found: ID does not exist" containerID="0c3c422513f25195f913f000743167fb71c4f15f7eb9c3cfafaf95746140ad37" Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.799358 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c3c422513f25195f913f000743167fb71c4f15f7eb9c3cfafaf95746140ad37"} err="failed to get container status \"0c3c422513f25195f913f000743167fb71c4f15f7eb9c3cfafaf95746140ad37\": rpc error: code = NotFound desc = could not find 
container \"0c3c422513f25195f913f000743167fb71c4f15f7eb9c3cfafaf95746140ad37\": container with ID starting with 0c3c422513f25195f913f000743167fb71c4f15f7eb9c3cfafaf95746140ad37 not found: ID does not exist" Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.799397 4869 scope.go:117] "RemoveContainer" containerID="b36731a678493638609be210f1f6ae554cae8f4d8b1fe9459ab86463aef8d418" Jan 06 14:59:23 crc kubenswrapper[4869]: E0106 14:59:23.799631 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b36731a678493638609be210f1f6ae554cae8f4d8b1fe9459ab86463aef8d418\": container with ID starting with b36731a678493638609be210f1f6ae554cae8f4d8b1fe9459ab86463aef8d418 not found: ID does not exist" containerID="b36731a678493638609be210f1f6ae554cae8f4d8b1fe9459ab86463aef8d418" Jan 06 14:59:23 crc kubenswrapper[4869]: I0106 14:59:23.799677 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b36731a678493638609be210f1f6ae554cae8f4d8b1fe9459ab86463aef8d418"} err="failed to get container status \"b36731a678493638609be210f1f6ae554cae8f4d8b1fe9459ab86463aef8d418\": rpc error: code = NotFound desc = could not find container \"b36731a678493638609be210f1f6ae554cae8f4d8b1fe9459ab86463aef8d418\": container with ID starting with b36731a678493638609be210f1f6ae554cae8f4d8b1fe9459ab86463aef8d418 not found: ID does not exist" Jan 06 14:59:25 crc kubenswrapper[4869]: I0106 14:59:25.722050 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a06b5f8a-463a-4106-ad9f-c0834fb7331b" path="/var/lib/kubelet/pods/a06b5f8a-463a-4106-ad9f-c0834fb7331b/volumes" Jan 06 14:59:26 crc kubenswrapper[4869]: I0106 14:59:26.501490 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rwzkk" Jan 06 14:59:26 crc kubenswrapper[4869]: I0106 14:59:26.502437 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rwzkk" Jan 06 14:59:26 crc kubenswrapper[4869]: I0106 14:59:26.557113 4869 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rwzkk" Jan 06 14:59:26 crc kubenswrapper[4869]: I0106 14:59:26.786523 4869 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rwzkk" Jan 06 14:59:27 crc kubenswrapper[4869]: I0106 14:59:27.769924 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rwzkk"] Jan 06 14:59:28 crc kubenswrapper[4869]: I0106 14:59:28.724753 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rwzkk" podUID="1cdce579-219a-46fa-8fef-b3d47ed3005c" containerName="registry-server" containerID="cri-o://eaeff106663036624acc68966e9958b70a1bd13e9a9106645cc2365ab27b2b30" gracePeriod=2 Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.659874 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rwzkk" Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.754038 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cdce579-219a-46fa-8fef-b3d47ed3005c-utilities\") pod \"1cdce579-219a-46fa-8fef-b3d47ed3005c\" (UID: \"1cdce579-219a-46fa-8fef-b3d47ed3005c\") " Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.754250 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txdms\" (UniqueName: \"kubernetes.io/projected/1cdce579-219a-46fa-8fef-b3d47ed3005c-kube-api-access-txdms\") pod \"1cdce579-219a-46fa-8fef-b3d47ed3005c\" (UID: \"1cdce579-219a-46fa-8fef-b3d47ed3005c\") " Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.754341 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cdce579-219a-46fa-8fef-b3d47ed3005c-catalog-content\") pod \"1cdce579-219a-46fa-8fef-b3d47ed3005c\" (UID: \"1cdce579-219a-46fa-8fef-b3d47ed3005c\") " Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.758973 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cdce579-219a-46fa-8fef-b3d47ed3005c-utilities" (OuterVolumeSpecName: "utilities") pod "1cdce579-219a-46fa-8fef-b3d47ed3005c" (UID: "1cdce579-219a-46fa-8fef-b3d47ed3005c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.760352 4869 generic.go:334] "Generic (PLEG): container finished" podID="1cdce579-219a-46fa-8fef-b3d47ed3005c" containerID="eaeff106663036624acc68966e9958b70a1bd13e9a9106645cc2365ab27b2b30" exitCode=0 Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.760402 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwzkk" event={"ID":"1cdce579-219a-46fa-8fef-b3d47ed3005c","Type":"ContainerDied","Data":"eaeff106663036624acc68966e9958b70a1bd13e9a9106645cc2365ab27b2b30"} Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.760431 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwzkk" event={"ID":"1cdce579-219a-46fa-8fef-b3d47ed3005c","Type":"ContainerDied","Data":"76c43173e522d5e6d4b69b09477cee040ae7b00fa3266ab2250d7358f06b3c24"} Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.760437 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rwzkk" Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.760463 4869 scope.go:117] "RemoveContainer" containerID="eaeff106663036624acc68966e9958b70a1bd13e9a9106645cc2365ab27b2b30" Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.772022 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cdce579-219a-46fa-8fef-b3d47ed3005c-kube-api-access-txdms" (OuterVolumeSpecName: "kube-api-access-txdms") pod "1cdce579-219a-46fa-8fef-b3d47ed3005c" (UID: "1cdce579-219a-46fa-8fef-b3d47ed3005c"). InnerVolumeSpecName "kube-api-access-txdms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.814734 4869 scope.go:117] "RemoveContainer" containerID="dbc95138f75e77d0c6cff664543ba26b605f0800800698d0a6c1f4d29058b0c0" Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.830220 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cdce579-219a-46fa-8fef-b3d47ed3005c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1cdce579-219a-46fa-8fef-b3d47ed3005c" (UID: "1cdce579-219a-46fa-8fef-b3d47ed3005c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.838720 4869 scope.go:117] "RemoveContainer" containerID="0a0a52692e8f9c82777c5c2d603766ec0b24d2d2de4e85812b4640993e14406f" Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.858790 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txdms\" (UniqueName: \"kubernetes.io/projected/1cdce579-219a-46fa-8fef-b3d47ed3005c-kube-api-access-txdms\") on node \"crc\" DevicePath \"\"" Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.858823 4869 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cdce579-219a-46fa-8fef-b3d47ed3005c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.858833 4869 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cdce579-219a-46fa-8fef-b3d47ed3005c-utilities\") on node \"crc\" DevicePath \"\"" Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.878094 4869 scope.go:117] "RemoveContainer" containerID="eaeff106663036624acc68966e9958b70a1bd13e9a9106645cc2365ab27b2b30" Jan 06 14:59:29 crc kubenswrapper[4869]: E0106 14:59:29.878534 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaeff106663036624acc68966e9958b70a1bd13e9a9106645cc2365ab27b2b30\": container with ID starting with eaeff106663036624acc68966e9958b70a1bd13e9a9106645cc2365ab27b2b30 not found: ID does not exist" containerID="eaeff106663036624acc68966e9958b70a1bd13e9a9106645cc2365ab27b2b30" Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.878565 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaeff106663036624acc68966e9958b70a1bd13e9a9106645cc2365ab27b2b30"} err="failed to get container status \"eaeff106663036624acc68966e9958b70a1bd13e9a9106645cc2365ab27b2b30\": rpc error: code = NotFound desc = could not find container \"eaeff106663036624acc68966e9958b70a1bd13e9a9106645cc2365ab27b2b30\": container with ID starting with eaeff106663036624acc68966e9958b70a1bd13e9a9106645cc2365ab27b2b30 not found: ID does not exist" Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.878585 4869 scope.go:117] "RemoveContainer" containerID="dbc95138f75e77d0c6cff664543ba26b605f0800800698d0a6c1f4d29058b0c0" Jan 06 14:59:29 crc kubenswrapper[4869]: E0106 14:59:29.879129 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbc95138f75e77d0c6cff664543ba26b605f0800800698d0a6c1f4d29058b0c0\": container with ID starting with dbc95138f75e77d0c6cff664543ba26b605f0800800698d0a6c1f4d29058b0c0 not found: ID does not exist" containerID="dbc95138f75e77d0c6cff664543ba26b605f0800800698d0a6c1f4d29058b0c0" Jan 
06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.879174 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbc95138f75e77d0c6cff664543ba26b605f0800800698d0a6c1f4d29058b0c0"} err="failed to get container status \"dbc95138f75e77d0c6cff664543ba26b605f0800800698d0a6c1f4d29058b0c0\": rpc error: code = NotFound desc = could not find container \"dbc95138f75e77d0c6cff664543ba26b605f0800800698d0a6c1f4d29058b0c0\": container with ID starting with dbc95138f75e77d0c6cff664543ba26b605f0800800698d0a6c1f4d29058b0c0 not found: ID does not exist" Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.879196 4869 scope.go:117] "RemoveContainer" containerID="0a0a52692e8f9c82777c5c2d603766ec0b24d2d2de4e85812b4640993e14406f" Jan 06 14:59:29 crc kubenswrapper[4869]: E0106 14:59:29.879461 4869 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a0a52692e8f9c82777c5c2d603766ec0b24d2d2de4e85812b4640993e14406f\": container with ID starting with 0a0a52692e8f9c82777c5c2d603766ec0b24d2d2de4e85812b4640993e14406f not found: ID does not exist" containerID="0a0a52692e8f9c82777c5c2d603766ec0b24d2d2de4e85812b4640993e14406f" Jan 06 14:59:29 crc kubenswrapper[4869]: I0106 14:59:29.879488 4869 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a0a52692e8f9c82777c5c2d603766ec0b24d2d2de4e85812b4640993e14406f"} err="failed to get container status \"0a0a52692e8f9c82777c5c2d603766ec0b24d2d2de4e85812b4640993e14406f\": rpc error: code = NotFound desc = could not find container \"0a0a52692e8f9c82777c5c2d603766ec0b24d2d2de4e85812b4640993e14406f\": container with ID starting with 0a0a52692e8f9c82777c5c2d603766ec0b24d2d2de4e85812b4640993e14406f not found: ID does not exist" Jan 06 14:59:30 crc kubenswrapper[4869]: I0106 14:59:30.095311 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rwzkk"] Jan 06 14:59:30 crc kubenswrapper[4869]: I0106 14:59:30.103675 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rwzkk"] Jan 06 14:59:31 crc kubenswrapper[4869]: I0106 14:59:31.718334 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cdce579-219a-46fa-8fef-b3d47ed3005c" path="/var/lib/kubelet/pods/1cdce579-219a-46fa-8fef-b3d47ed3005c/volumes" Jan 06 14:59:33 crc kubenswrapper[4869]: I0106 14:59:33.622115 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 14:59:33 crc kubenswrapper[4869]: I0106 14:59:33.622437 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 14:59:49 crc kubenswrapper[4869]: I0106 14:59:49.638950 4869 scope.go:117] "RemoveContainer" containerID="64be19045234c0ac1f684977742c8045985ac3ed623d6719653850456f2b1544" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.178504 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29461860-5jf8g"] Jan 06 15:00:00 
crc kubenswrapper[4869]: E0106 15:00:00.179480 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cdce579-219a-46fa-8fef-b3d47ed3005c" containerName="extract-utilities" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.179500 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cdce579-219a-46fa-8fef-b3d47ed3005c" containerName="extract-utilities" Jan 06 15:00:00 crc kubenswrapper[4869]: E0106 15:00:00.179514 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a06b5f8a-463a-4106-ad9f-c0834fb7331b" containerName="extract-utilities" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.179523 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a06b5f8a-463a-4106-ad9f-c0834fb7331b" containerName="extract-utilities" Jan 06 15:00:00 crc kubenswrapper[4869]: E0106 15:00:00.179543 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a06b5f8a-463a-4106-ad9f-c0834fb7331b" containerName="extract-content" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.179551 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a06b5f8a-463a-4106-ad9f-c0834fb7331b" containerName="extract-content" Jan 06 15:00:00 crc kubenswrapper[4869]: E0106 15:00:00.179572 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cdce579-219a-46fa-8fef-b3d47ed3005c" containerName="extract-content" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.179580 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cdce579-219a-46fa-8fef-b3d47ed3005c" containerName="extract-content" Jan 06 15:00:00 crc kubenswrapper[4869]: E0106 15:00:00.179590 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cdce579-219a-46fa-8fef-b3d47ed3005c" containerName="registry-server" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.179597 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cdce579-219a-46fa-8fef-b3d47ed3005c" containerName="registry-server" Jan 06 15:00:00 crc kubenswrapper[4869]: E0106 15:00:00.179613 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a06b5f8a-463a-4106-ad9f-c0834fb7331b" containerName="registry-server" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.179620 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="a06b5f8a-463a-4106-ad9f-c0834fb7331b" containerName="registry-server" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.179839 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="a06b5f8a-463a-4106-ad9f-c0834fb7331b" containerName="registry-server" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.179867 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cdce579-219a-46fa-8fef-b3d47ed3005c" containerName="registry-server" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.180578 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29461860-5jf8g" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.182629 4869 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.185635 4869 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.186479 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29461860-5jf8g"] Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.281102 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37b58f6f-34e3-4b29-86b0-85a6bd06a57d-config-volume\") pod \"collect-profiles-29461860-5jf8g\" (UID: \"37b58f6f-34e3-4b29-86b0-85a6bd06a57d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461860-5jf8g" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.281275 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37b58f6f-34e3-4b29-86b0-85a6bd06a57d-secret-volume\") pod \"collect-profiles-29461860-5jf8g\" (UID: \"37b58f6f-34e3-4b29-86b0-85a6bd06a57d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461860-5jf8g" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.281443 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72mtw\" (UniqueName: \"kubernetes.io/projected/37b58f6f-34e3-4b29-86b0-85a6bd06a57d-kube-api-access-72mtw\") pod \"collect-profiles-29461860-5jf8g\" (UID: \"37b58f6f-34e3-4b29-86b0-85a6bd06a57d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461860-5jf8g" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.383481 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37b58f6f-34e3-4b29-86b0-85a6bd06a57d-config-volume\") pod \"collect-profiles-29461860-5jf8g\" (UID: \"37b58f6f-34e3-4b29-86b0-85a6bd06a57d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461860-5jf8g" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.383930 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37b58f6f-34e3-4b29-86b0-85a6bd06a57d-secret-volume\") pod \"collect-profiles-29461860-5jf8g\" (UID: \"37b58f6f-34e3-4b29-86b0-85a6bd06a57d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461860-5jf8g" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.383983 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72mtw\" (UniqueName: \"kubernetes.io/projected/37b58f6f-34e3-4b29-86b0-85a6bd06a57d-kube-api-access-72mtw\") pod \"collect-profiles-29461860-5jf8g\" (UID: \"37b58f6f-34e3-4b29-86b0-85a6bd06a57d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461860-5jf8g" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.384577 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37b58f6f-34e3-4b29-86b0-85a6bd06a57d-config-volume\") pod 
\"collect-profiles-29461860-5jf8g\" (UID: \"37b58f6f-34e3-4b29-86b0-85a6bd06a57d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461860-5jf8g" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.390313 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37b58f6f-34e3-4b29-86b0-85a6bd06a57d-secret-volume\") pod \"collect-profiles-29461860-5jf8g\" (UID: \"37b58f6f-34e3-4b29-86b0-85a6bd06a57d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461860-5jf8g" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.401240 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72mtw\" (UniqueName: \"kubernetes.io/projected/37b58f6f-34e3-4b29-86b0-85a6bd06a57d-kube-api-access-72mtw\") pod \"collect-profiles-29461860-5jf8g\" (UID: \"37b58f6f-34e3-4b29-86b0-85a6bd06a57d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29461860-5jf8g" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.499446 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29461860-5jf8g" Jan 06 15:00:00 crc kubenswrapper[4869]: I0106 15:00:00.729468 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29461860-5jf8g"] Jan 06 15:00:01 crc kubenswrapper[4869]: I0106 15:00:01.039967 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29461860-5jf8g" event={"ID":"37b58f6f-34e3-4b29-86b0-85a6bd06a57d","Type":"ContainerStarted","Data":"21501c4da864e76423eab9cc23344f0945f97b2098b16f3a50690eca6486141e"} Jan 06 15:00:01 crc kubenswrapper[4869]: I0106 15:00:01.040005 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29461860-5jf8g" event={"ID":"37b58f6f-34e3-4b29-86b0-85a6bd06a57d","Type":"ContainerStarted","Data":"b95f193bdcae5dee3a52ce54d2bbc048a3d7c297feec6c1081533cbba60e4ada"} Jan 06 15:00:01 crc kubenswrapper[4869]: I0106 15:00:01.059354 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29461860-5jf8g" podStartSLOduration=1.059334417 podStartE2EDuration="1.059334417s" podCreationTimestamp="2026-01-06 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 15:00:01.0549636 +0000 UTC m=+3619.594651264" watchObservedRunningTime="2026-01-06 15:00:01.059334417 +0000 UTC m=+3619.599022081" Jan 06 15:00:02 crc kubenswrapper[4869]: I0106 15:00:02.051242 4869 generic.go:334] "Generic (PLEG): container finished" podID="37b58f6f-34e3-4b29-86b0-85a6bd06a57d" containerID="21501c4da864e76423eab9cc23344f0945f97b2098b16f3a50690eca6486141e" exitCode=0 Jan 06 15:00:02 crc kubenswrapper[4869]: I0106 15:00:02.051339 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29461860-5jf8g" event={"ID":"37b58f6f-34e3-4b29-86b0-85a6bd06a57d","Type":"ContainerDied","Data":"21501c4da864e76423eab9cc23344f0945f97b2098b16f3a50690eca6486141e"} Jan 06 15:00:03 crc kubenswrapper[4869]: I0106 15:00:03.365951 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29461860-5jf8g" Jan 06 15:00:03 crc kubenswrapper[4869]: I0106 15:00:03.439847 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72mtw\" (UniqueName: \"kubernetes.io/projected/37b58f6f-34e3-4b29-86b0-85a6bd06a57d-kube-api-access-72mtw\") pod \"37b58f6f-34e3-4b29-86b0-85a6bd06a57d\" (UID: \"37b58f6f-34e3-4b29-86b0-85a6bd06a57d\") " Jan 06 15:00:03 crc kubenswrapper[4869]: I0106 15:00:03.439989 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37b58f6f-34e3-4b29-86b0-85a6bd06a57d-secret-volume\") pod \"37b58f6f-34e3-4b29-86b0-85a6bd06a57d\" (UID: \"37b58f6f-34e3-4b29-86b0-85a6bd06a57d\") " Jan 06 15:00:03 crc kubenswrapper[4869]: I0106 15:00:03.440042 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37b58f6f-34e3-4b29-86b0-85a6bd06a57d-config-volume\") pod \"37b58f6f-34e3-4b29-86b0-85a6bd06a57d\" (UID: \"37b58f6f-34e3-4b29-86b0-85a6bd06a57d\") " Jan 06 15:00:03 crc kubenswrapper[4869]: I0106 15:00:03.441094 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37b58f6f-34e3-4b29-86b0-85a6bd06a57d-config-volume" (OuterVolumeSpecName: "config-volume") pod "37b58f6f-34e3-4b29-86b0-85a6bd06a57d" (UID: "37b58f6f-34e3-4b29-86b0-85a6bd06a57d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 06 15:00:03 crc kubenswrapper[4869]: I0106 15:00:03.445718 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b58f6f-34e3-4b29-86b0-85a6bd06a57d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "37b58f6f-34e3-4b29-86b0-85a6bd06a57d" (UID: "37b58f6f-34e3-4b29-86b0-85a6bd06a57d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 15:00:03 crc kubenswrapper[4869]: I0106 15:00:03.448622 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b58f6f-34e3-4b29-86b0-85a6bd06a57d-kube-api-access-72mtw" (OuterVolumeSpecName: "kube-api-access-72mtw") pod "37b58f6f-34e3-4b29-86b0-85a6bd06a57d" (UID: "37b58f6f-34e3-4b29-86b0-85a6bd06a57d"). InnerVolumeSpecName "kube-api-access-72mtw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 15:00:03 crc kubenswrapper[4869]: I0106 15:00:03.541658 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72mtw\" (UniqueName: \"kubernetes.io/projected/37b58f6f-34e3-4b29-86b0-85a6bd06a57d-kube-api-access-72mtw\") on node \"crc\" DevicePath \"\"" Jan 06 15:00:03 crc kubenswrapper[4869]: I0106 15:00:03.541949 4869 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/37b58f6f-34e3-4b29-86b0-85a6bd06a57d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 06 15:00:03 crc kubenswrapper[4869]: I0106 15:00:03.541959 4869 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37b58f6f-34e3-4b29-86b0-85a6bd06a57d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 06 15:00:03 crc kubenswrapper[4869]: I0106 15:00:03.622091 4869 patch_prober.go:28] interesting pod/machine-config-daemon-kt9df container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 06 15:00:03 crc kubenswrapper[4869]: I0106 15:00:03.622149 4869 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 06 15:00:03 crc kubenswrapper[4869]: I0106 15:00:03.622194 4869 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" Jan 06 15:00:03 crc kubenswrapper[4869]: I0106 15:00:03.623003 4869 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8d1323dd7d8e0827a76a324801994e132e4d7120f0b885c1a5f5127ee412227f"} pod="openshift-machine-config-operator/machine-config-daemon-kt9df" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 06 15:00:03 crc kubenswrapper[4869]: I0106 15:00:03.623065 4869 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerName="machine-config-daemon" containerID="cri-o://8d1323dd7d8e0827a76a324801994e132e4d7120f0b885c1a5f5127ee412227f" gracePeriod=600 Jan 06 15:00:03 crc kubenswrapper[4869]: E0106 15:00:03.853261 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 15:00:04 crc kubenswrapper[4869]: I0106 15:00:04.069933 4869 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29461860-5jf8g" Jan 06 15:00:04 crc kubenswrapper[4869]: I0106 15:00:04.069954 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29461860-5jf8g" event={"ID":"37b58f6f-34e3-4b29-86b0-85a6bd06a57d","Type":"ContainerDied","Data":"b95f193bdcae5dee3a52ce54d2bbc048a3d7c297feec6c1081533cbba60e4ada"} Jan 06 15:00:04 crc kubenswrapper[4869]: I0106 15:00:04.070027 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b95f193bdcae5dee3a52ce54d2bbc048a3d7c297feec6c1081533cbba60e4ada" Jan 06 15:00:04 crc kubenswrapper[4869]: I0106 15:00:04.072362 4869 generic.go:334] "Generic (PLEG): container finished" podID="89b72572-a31b-48f1-93f4-cbfad03736b1" containerID="8d1323dd7d8e0827a76a324801994e132e4d7120f0b885c1a5f5127ee412227f" exitCode=0 Jan 06 15:00:04 crc kubenswrapper[4869]: I0106 15:00:04.072439 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" event={"ID":"89b72572-a31b-48f1-93f4-cbfad03736b1","Type":"ContainerDied","Data":"8d1323dd7d8e0827a76a324801994e132e4d7120f0b885c1a5f5127ee412227f"} Jan 06 15:00:04 crc kubenswrapper[4869]: I0106 15:00:04.072644 4869 scope.go:117] "RemoveContainer" containerID="b2931da1dc48569fc1d9b1b3f1e0812f52d961821796eb9a8b76abca6a174489" Jan 06 15:00:04 crc kubenswrapper[4869]: I0106 15:00:04.073210 4869 scope.go:117] "RemoveContainer" containerID="8d1323dd7d8e0827a76a324801994e132e4d7120f0b885c1a5f5127ee412227f" Jan 06 15:00:04 crc kubenswrapper[4869]: E0106 15:00:04.073466 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 15:00:04 crc kubenswrapper[4869]: I0106 15:00:04.432386 4869 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29461815-dzrfx"] Jan 06 15:00:04 crc kubenswrapper[4869]: I0106 15:00:04.438986 4869 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29461815-dzrfx"] Jan 06 15:00:05 crc kubenswrapper[4869]: I0106 15:00:05.715368 4869 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c202d00c-2db6-42a5-bf18-fb6297a6dd17" path="/var/lib/kubelet/pods/c202d00c-2db6-42a5-bf18-fb6297a6dd17/volumes" Jan 06 15:00:18 crc kubenswrapper[4869]: I0106 15:00:18.704322 4869 scope.go:117] "RemoveContainer" containerID="8d1323dd7d8e0827a76a324801994e132e4d7120f0b885c1a5f5127ee412227f" Jan 06 15:00:18 crc kubenswrapper[4869]: E0106 15:00:18.705018 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 15:00:30 crc kubenswrapper[4869]: I0106 15:00:30.705275 4869 scope.go:117] "RemoveContainer" 
containerID="8d1323dd7d8e0827a76a324801994e132e4d7120f0b885c1a5f5127ee412227f" Jan 06 15:00:30 crc kubenswrapper[4869]: E0106 15:00:30.706149 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 15:00:43 crc kubenswrapper[4869]: I0106 15:00:43.704560 4869 scope.go:117] "RemoveContainer" containerID="8d1323dd7d8e0827a76a324801994e132e4d7120f0b885c1a5f5127ee412227f" Jan 06 15:00:43 crc kubenswrapper[4869]: E0106 15:00:43.705248 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 15:00:49 crc kubenswrapper[4869]: I0106 15:00:49.744889 4869 scope.go:117] "RemoveContainer" containerID="c07a52c832bbed6ebaa9ffa80812486ebc8474dab2b80bff99fd352d2fd155d1" Jan 06 15:00:57 crc kubenswrapper[4869]: I0106 15:00:57.704468 4869 scope.go:117] "RemoveContainer" containerID="8d1323dd7d8e0827a76a324801994e132e4d7120f0b885c1a5f5127ee412227f" Jan 06 15:00:57 crc kubenswrapper[4869]: E0106 15:00:57.705228 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 15:01:00 crc kubenswrapper[4869]: I0106 15:01:00.168949 4869 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29461861-fq5b4"] Jan 06 15:01:00 crc kubenswrapper[4869]: E0106 15:01:00.170190 4869 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b58f6f-34e3-4b29-86b0-85a6bd06a57d" containerName="collect-profiles" Jan 06 15:01:00 crc kubenswrapper[4869]: I0106 15:01:00.170218 4869 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b58f6f-34e3-4b29-86b0-85a6bd06a57d" containerName="collect-profiles" Jan 06 15:01:00 crc kubenswrapper[4869]: I0106 15:01:00.170571 4869 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b58f6f-34e3-4b29-86b0-85a6bd06a57d" containerName="collect-profiles" Jan 06 15:01:00 crc kubenswrapper[4869]: I0106 15:01:00.171520 4869 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29461861-fq5b4" Jan 06 15:01:00 crc kubenswrapper[4869]: I0106 15:01:00.192691 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29461861-fq5b4"] Jan 06 15:01:00 crc kubenswrapper[4869]: I0106 15:01:00.287881 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14fb61c6-8e3c-46db-8371-1dce4d11a726-config-data\") pod \"keystone-cron-29461861-fq5b4\" (UID: \"14fb61c6-8e3c-46db-8371-1dce4d11a726\") " pod="openstack/keystone-cron-29461861-fq5b4" Jan 06 15:01:00 crc kubenswrapper[4869]: I0106 15:01:00.287958 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p95wt\" (UniqueName: \"kubernetes.io/projected/14fb61c6-8e3c-46db-8371-1dce4d11a726-kube-api-access-p95wt\") pod \"keystone-cron-29461861-fq5b4\" (UID: \"14fb61c6-8e3c-46db-8371-1dce4d11a726\") " pod="openstack/keystone-cron-29461861-fq5b4" Jan 06 15:01:00 crc kubenswrapper[4869]: I0106 15:01:00.288004 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14fb61c6-8e3c-46db-8371-1dce4d11a726-combined-ca-bundle\") pod \"keystone-cron-29461861-fq5b4\" (UID: \"14fb61c6-8e3c-46db-8371-1dce4d11a726\") " pod="openstack/keystone-cron-29461861-fq5b4" Jan 06 15:01:00 crc kubenswrapper[4869]: I0106 15:01:00.288023 4869 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/14fb61c6-8e3c-46db-8371-1dce4d11a726-fernet-keys\") pod \"keystone-cron-29461861-fq5b4\" (UID: \"14fb61c6-8e3c-46db-8371-1dce4d11a726\") " pod="openstack/keystone-cron-29461861-fq5b4" Jan 06 15:01:00 crc kubenswrapper[4869]: I0106 15:01:00.389352 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14fb61c6-8e3c-46db-8371-1dce4d11a726-config-data\") pod \"keystone-cron-29461861-fq5b4\" (UID: \"14fb61c6-8e3c-46db-8371-1dce4d11a726\") " pod="openstack/keystone-cron-29461861-fq5b4" Jan 06 15:01:00 crc kubenswrapper[4869]: I0106 15:01:00.389436 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p95wt\" (UniqueName: \"kubernetes.io/projected/14fb61c6-8e3c-46db-8371-1dce4d11a726-kube-api-access-p95wt\") pod \"keystone-cron-29461861-fq5b4\" (UID: \"14fb61c6-8e3c-46db-8371-1dce4d11a726\") " pod="openstack/keystone-cron-29461861-fq5b4" Jan 06 15:01:00 crc kubenswrapper[4869]: I0106 15:01:00.389482 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14fb61c6-8e3c-46db-8371-1dce4d11a726-combined-ca-bundle\") pod \"keystone-cron-29461861-fq5b4\" (UID: \"14fb61c6-8e3c-46db-8371-1dce4d11a726\") " pod="openstack/keystone-cron-29461861-fq5b4" Jan 06 15:01:00 crc kubenswrapper[4869]: I0106 15:01:00.389507 4869 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/14fb61c6-8e3c-46db-8371-1dce4d11a726-fernet-keys\") pod \"keystone-cron-29461861-fq5b4\" (UID: \"14fb61c6-8e3c-46db-8371-1dce4d11a726\") " pod="openstack/keystone-cron-29461861-fq5b4" Jan 06 15:01:00 crc kubenswrapper[4869]: I0106 15:01:00.399062 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14fb61c6-8e3c-46db-8371-1dce4d11a726-combined-ca-bundle\") pod \"keystone-cron-29461861-fq5b4\" (UID: \"14fb61c6-8e3c-46db-8371-1dce4d11a726\") " pod="openstack/keystone-cron-29461861-fq5b4" Jan 06 15:01:00 crc kubenswrapper[4869]: I0106 15:01:00.403316 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/14fb61c6-8e3c-46db-8371-1dce4d11a726-fernet-keys\") pod \"keystone-cron-29461861-fq5b4\" (UID: \"14fb61c6-8e3c-46db-8371-1dce4d11a726\") " pod="openstack/keystone-cron-29461861-fq5b4" Jan 06 15:01:00 crc kubenswrapper[4869]: I0106 15:01:00.405618 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14fb61c6-8e3c-46db-8371-1dce4d11a726-config-data\") pod \"keystone-cron-29461861-fq5b4\" (UID: \"14fb61c6-8e3c-46db-8371-1dce4d11a726\") " pod="openstack/keystone-cron-29461861-fq5b4" Jan 06 15:01:00 crc kubenswrapper[4869]: I0106 15:01:00.410351 4869 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p95wt\" (UniqueName: \"kubernetes.io/projected/14fb61c6-8e3c-46db-8371-1dce4d11a726-kube-api-access-p95wt\") pod \"keystone-cron-29461861-fq5b4\" (UID: \"14fb61c6-8e3c-46db-8371-1dce4d11a726\") " pod="openstack/keystone-cron-29461861-fq5b4" Jan 06 15:01:00 crc kubenswrapper[4869]: I0106 15:01:00.500913 4869 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29461861-fq5b4" Jan 06 15:01:00 crc kubenswrapper[4869]: I0106 15:01:00.906862 4869 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29461861-fq5b4"] Jan 06 15:01:01 crc kubenswrapper[4869]: I0106 15:01:01.519801 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29461861-fq5b4" event={"ID":"14fb61c6-8e3c-46db-8371-1dce4d11a726","Type":"ContainerStarted","Data":"a422c90a5088542094b3897e439a9a30bc08e19abbb71e082e1df02f24982c89"} Jan 06 15:01:01 crc kubenswrapper[4869]: I0106 15:01:01.519860 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29461861-fq5b4" event={"ID":"14fb61c6-8e3c-46db-8371-1dce4d11a726","Type":"ContainerStarted","Data":"22cd0cae905585b55c8a13a07fbcdaf92b1b7d7d9d6bf02159fe2c5574ec1d23"} Jan 06 15:01:01 crc kubenswrapper[4869]: I0106 15:01:01.539382 4869 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29461861-fq5b4" podStartSLOduration=1.5393579210000001 podStartE2EDuration="1.539357921s" podCreationTimestamp="2026-01-06 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-06 15:01:01.533514599 +0000 UTC m=+3680.073202293" watchObservedRunningTime="2026-01-06 15:01:01.539357921 +0000 UTC m=+3680.079045585" Jan 06 15:01:03 crc kubenswrapper[4869]: I0106 15:01:03.540331 4869 generic.go:334] "Generic (PLEG): container finished" podID="14fb61c6-8e3c-46db-8371-1dce4d11a726" containerID="a422c90a5088542094b3897e439a9a30bc08e19abbb71e082e1df02f24982c89" exitCode=0 Jan 06 15:01:03 crc kubenswrapper[4869]: I0106 15:01:03.540907 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29461861-fq5b4" event={"ID":"14fb61c6-8e3c-46db-8371-1dce4d11a726","Type":"ContainerDied","Data":"a422c90a5088542094b3897e439a9a30bc08e19abbb71e082e1df02f24982c89"} Jan 06 15:01:04 crc 
kubenswrapper[4869]: I0106 15:01:04.931181 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29461861-fq5b4" Jan 06 15:01:05 crc kubenswrapper[4869]: I0106 15:01:05.096160 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14fb61c6-8e3c-46db-8371-1dce4d11a726-config-data\") pod \"14fb61c6-8e3c-46db-8371-1dce4d11a726\" (UID: \"14fb61c6-8e3c-46db-8371-1dce4d11a726\") " Jan 06 15:01:05 crc kubenswrapper[4869]: I0106 15:01:05.096228 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14fb61c6-8e3c-46db-8371-1dce4d11a726-combined-ca-bundle\") pod \"14fb61c6-8e3c-46db-8371-1dce4d11a726\" (UID: \"14fb61c6-8e3c-46db-8371-1dce4d11a726\") " Jan 06 15:01:05 crc kubenswrapper[4869]: I0106 15:01:05.096276 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/14fb61c6-8e3c-46db-8371-1dce4d11a726-fernet-keys\") pod \"14fb61c6-8e3c-46db-8371-1dce4d11a726\" (UID: \"14fb61c6-8e3c-46db-8371-1dce4d11a726\") " Jan 06 15:01:05 crc kubenswrapper[4869]: I0106 15:01:05.096308 4869 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p95wt\" (UniqueName: \"kubernetes.io/projected/14fb61c6-8e3c-46db-8371-1dce4d11a726-kube-api-access-p95wt\") pod \"14fb61c6-8e3c-46db-8371-1dce4d11a726\" (UID: \"14fb61c6-8e3c-46db-8371-1dce4d11a726\") " Jan 06 15:01:05 crc kubenswrapper[4869]: I0106 15:01:05.102697 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14fb61c6-8e3c-46db-8371-1dce4d11a726-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "14fb61c6-8e3c-46db-8371-1dce4d11a726" (UID: "14fb61c6-8e3c-46db-8371-1dce4d11a726"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 15:01:05 crc kubenswrapper[4869]: I0106 15:01:05.107908 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14fb61c6-8e3c-46db-8371-1dce4d11a726-kube-api-access-p95wt" (OuterVolumeSpecName: "kube-api-access-p95wt") pod "14fb61c6-8e3c-46db-8371-1dce4d11a726" (UID: "14fb61c6-8e3c-46db-8371-1dce4d11a726"). InnerVolumeSpecName "kube-api-access-p95wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 06 15:01:05 crc kubenswrapper[4869]: I0106 15:01:05.131801 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14fb61c6-8e3c-46db-8371-1dce4d11a726-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14fb61c6-8e3c-46db-8371-1dce4d11a726" (UID: "14fb61c6-8e3c-46db-8371-1dce4d11a726"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 15:01:05 crc kubenswrapper[4869]: I0106 15:01:05.152557 4869 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14fb61c6-8e3c-46db-8371-1dce4d11a726-config-data" (OuterVolumeSpecName: "config-data") pod "14fb61c6-8e3c-46db-8371-1dce4d11a726" (UID: "14fb61c6-8e3c-46db-8371-1dce4d11a726"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 06 15:01:05 crc kubenswrapper[4869]: I0106 15:01:05.200391 4869 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14fb61c6-8e3c-46db-8371-1dce4d11a726-config-data\") on node \"crc\" DevicePath \"\"" Jan 06 15:01:05 crc kubenswrapper[4869]: I0106 15:01:05.200439 4869 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14fb61c6-8e3c-46db-8371-1dce4d11a726-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 06 15:01:05 crc kubenswrapper[4869]: I0106 15:01:05.200457 4869 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/14fb61c6-8e3c-46db-8371-1dce4d11a726-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 06 15:01:05 crc kubenswrapper[4869]: I0106 15:01:05.200468 4869 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p95wt\" (UniqueName: \"kubernetes.io/projected/14fb61c6-8e3c-46db-8371-1dce4d11a726-kube-api-access-p95wt\") on node \"crc\" DevicePath \"\"" Jan 06 15:01:05 crc kubenswrapper[4869]: I0106 15:01:05.556163 4869 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29461861-fq5b4" event={"ID":"14fb61c6-8e3c-46db-8371-1dce4d11a726","Type":"ContainerDied","Data":"22cd0cae905585b55c8a13a07fbcdaf92b1b7d7d9d6bf02159fe2c5574ec1d23"} Jan 06 15:01:05 crc kubenswrapper[4869]: I0106 15:01:05.556215 4869 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22cd0cae905585b55c8a13a07fbcdaf92b1b7d7d9d6bf02159fe2c5574ec1d23" Jan 06 15:01:05 crc kubenswrapper[4869]: I0106 15:01:05.556272 4869 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29461861-fq5b4" Jan 06 15:01:08 crc kubenswrapper[4869]: I0106 15:01:08.704012 4869 scope.go:117] "RemoveContainer" containerID="8d1323dd7d8e0827a76a324801994e132e4d7120f0b885c1a5f5127ee412227f" Jan 06 15:01:08 crc kubenswrapper[4869]: E0106 15:01:08.704732 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 15:01:20 crc kubenswrapper[4869]: I0106 15:01:20.703983 4869 scope.go:117] "RemoveContainer" containerID="8d1323dd7d8e0827a76a324801994e132e4d7120f0b885c1a5f5127ee412227f" Jan 06 15:01:20 crc kubenswrapper[4869]: E0106 15:01:20.704859 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 15:01:34 crc kubenswrapper[4869]: I0106 15:01:34.704827 4869 scope.go:117] "RemoveContainer" containerID="8d1323dd7d8e0827a76a324801994e132e4d7120f0b885c1a5f5127ee412227f" Jan 06 15:01:34 crc kubenswrapper[4869]: E0106 15:01:34.707156 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 15:01:48 crc kubenswrapper[4869]: I0106 15:01:48.704986 4869 scope.go:117] "RemoveContainer" containerID="8d1323dd7d8e0827a76a324801994e132e4d7120f0b885c1a5f5127ee412227f" Jan 06 15:01:48 crc kubenswrapper[4869]: E0106 15:01:48.705841 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 15:02:02 crc kubenswrapper[4869]: I0106 15:02:02.705257 4869 scope.go:117] "RemoveContainer" containerID="8d1323dd7d8e0827a76a324801994e132e4d7120f0b885c1a5f5127ee412227f" Jan 06 15:02:02 crc kubenswrapper[4869]: E0106 15:02:02.706873 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 15:02:14 crc kubenswrapper[4869]: I0106 15:02:14.705026 4869 scope.go:117] "RemoveContainer" containerID="8d1323dd7d8e0827a76a324801994e132e4d7120f0b885c1a5f5127ee412227f" Jan 06 15:02:14 crc kubenswrapper[4869]: E0106 15:02:14.705735 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1" Jan 06 15:02:27 crc kubenswrapper[4869]: I0106 15:02:27.704875 4869 scope.go:117] "RemoveContainer" containerID="8d1323dd7d8e0827a76a324801994e132e4d7120f0b885c1a5f5127ee412227f" Jan 06 15:02:27 crc kubenswrapper[4869]: E0106 15:02:27.705760 4869 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kt9df_openshift-machine-config-operator(89b72572-a31b-48f1-93f4-cbfad03736b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-kt9df" podUID="89b72572-a31b-48f1-93f4-cbfad03736b1"
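
Editor's notes. The log above exercises several distinct kubelet mechanisms; the short Python sketches below unpack them. They are illustrative reconstructions keyed to specific log lines, not kubelet or CRI-O code, and every helper name in them is hypothetical.

First, the paired "RemoveContainer" / "DeleteContainer returned error ... NotFound" lines at 14:59:29. These look alarming but are benign: a cleanup pass that treats "container already gone" as success stays idempotent when the same container ID is processed more than once.

```python
# Hypothetical stand-ins, not the kubelet's or CRI-O's real API: a minimal
# sketch of why the NotFound errors at 14:59:29 are harmless.

class NotFoundError(Exception):
    """Stand-in for the CRI 'rpc error: code = NotFound' status."""

class FakeRuntime:
    def __init__(self, containers):
        self.containers = set(containers)

    def delete(self, container_id):
        if container_id not in self.containers:
            raise NotFoundError(container_id)
        self.containers.remove(container_id)

def remove_container(runtime, container_id):
    try:
        runtime.delete(container_id)
    except NotFoundError:
        # Already removed by an earlier pass -- note it and move on, as the
        # kubelet does with "DeleteContainer returned error ... NotFound".
        pass

rt = FakeRuntime({"dbc95138f75e"})
remove_container(rt, "dbc95138f75e")  # removes the container
remove_container(rt, "dbc95138f75e")  # second call is a no-op, as in the log
```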
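The recurring "Probe failed ... connection refused" lines are the kubelet's HTTP liveness check against the machine-config-daemon's health endpoint; once enough consecutive checks fail, the kubelet logs "Container machine-config-daemon failed liveness probe, will be restarted" and kills the container (here with gracePeriod=600). A minimal sketch of that check, with the caveat that the 30s period (inferred from the 14:59:33 / 15:00:03 spacing) and failureThreshold=3 are assumptions, not values read from the pod spec:

```python
# Sketch of the HTTP liveness check behind the "Probe failed" lines.
# The URL is taken from the log; period and threshold are assumptions.
import urllib.request
import urllib.error

def probe_once(url="http://127.0.0.1:8798/health", timeout=1.0):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False  # covers "connect: connection refused"

def should_restart(results, failure_threshold=3):
    # Restart once the last failure_threshold probes have all failed.
    return len(results) >= failure_threshold and not any(results[-failure_threshold:])

print(probe_once())                       # False while nothing listens on 8798
print(should_restart([False] * 3))        # True -> "will be restarted"
```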
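The "back-off 5m0s restarting failed container" errors that repeat from 15:00:03 to 15:02:27 come from the kubelet's restart backoff: the delay for a crashing container doubles on each restart and is capped at five minutes. The initial 10s delay and doubling factor below are the long-standing kubelet defaults, stated as an assumption rather than read from this node's configuration. The interleaved "RemoveContainer" / "Error syncing pod, skipping" pairs every 12-14 seconds are the sync loop revisiting the pod and declining to restart it while the backoff window is still open.

```python
# Sketch of the delay schedule behind "back-off 5m0s": doubling from an
# assumed 10s initial delay, capped at 300s (5m0s).
from itertools import islice

def restart_delays(initial=10, cap=300):
    delay = initial
    while True:
        yield min(delay, cap)
        delay *= 2

print(list(islice(restart_delays(), 7)))  # [10, 20, 40, 80, 160, 300, 300]
```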
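The numeric suffixes in collect-profiles-29461860-5jf8g and keystone-cron-29461861-fq5b4 are not random: the CronJob controller names each Job after its scheduled time in minutes since the Unix epoch (the trailing -5jf8g / -fq5b4 is the Job controller's random pod suffix). Decoding the suffixes recovers exactly the timestamps at which the kubelet logs "SyncLoop ADD" for each pod:

```python
# Decode a CronJob-derived job-name suffix (minutes since the Unix epoch).
from datetime import datetime, timezone

def decode_cronjob_suffix(suffix: int) -> datetime:
    return datetime.fromtimestamp(suffix * 60, tz=timezone.utc)

print(decode_cronjob_suffix(29461860))  # 2026-01-06 15:00:00+00:00
print(decode_cronjob_suffix(29461861))  # 2026-01-06 15:01:00+00:00
```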
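The pod_startup_latency_tracker line for the collect-profiles pod can be checked by hand: the reported podStartSLOduration=1.059334417s is the watch-observed running time (15:00:01.059334417) minus the pod creation timestamp (15:00:00). Image-pull time would be subtracted from the SLO figure, but both pull timestamps are the zero value here (nothing was pulled), which is why the SLO and E2E durations coincide. A worked check (datetime only carries microseconds, so the result is truncated slightly):

```python
# Reproduce podStartSLOduration from the timestamps in the log line.
from datetime import datetime, timezone

created  = datetime(2026, 1, 6, 15, 0, 0, tzinfo=timezone.utc)
observed = datetime(2026, 1, 6, 15, 0, 1, 59334, tzinfo=timezone.utc)

print((observed - created).total_seconds())  # 1.059334 -> "1.059334417s"
```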
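Finally, the volume lines follow a fixed reconciliation pattern for both cron pods: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded while the pod starts; UnmountVolume / TearDown succeeded and "Volume detached" once the job finishes and the pod is deleted. A toy desired-state-vs-actual-state reconciler, sketching the shape of that loop (not kubelet code):

```python
# Volumes in the pod spec but not yet mounted produce mount operations;
# volumes still mounted for a deleted pod produce unmount operations,
# after which the kubelet reports "Volume detached".
def reconcile(desired: set, actual: set):
    return ([("MountVolume", v) for v in sorted(desired - actual)] +
            [("UnmountVolume", v) for v in sorted(actual - desired)])

vols = {"config-volume", "secret-volume", "kube-api-access-72mtw"}
print(reconcile(vols, set()))   # while collect-profiles-29461860 is starting
print(reconcile(set(), vols))   # after the job completes and the pod is gone
```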